Implied Volatility and Greeks of Index Options¶

In the article below, we will (i) automatically find the Option (of choice) closest to At The Money (ATM) and (ii) calculate its Implied Volatility and Greeks. We focus below on Future (Monthly) Options on the Index .STOXX50E (EURO STOXX 50 EUR PRICE INDEX) ('EUREX') and .SPX (S&P 500 INDEX), although you can apply the logic below for another index. To find the ATM instrument, we simply and efficiently use the Search API. Usually, the calculation of the Black-Scholes-Merton model's Implied Volatility involves numerical techniques, since it is not a closed equation (unless restricting assumptions that log returns follow a standard normal distribution with mean is zero, $\mu$ = 0, and standard deviation is zero, $\sigma$ = 1, are made). If we used these techniques in calculating each Implied Volatility value on our computer, it would take several seconds - if not minutes - for each data point computed. I have chosen to use the Instrument Pricing Analytics (IPA) service in the Refinitiv Data Platform API Family instead, as this service allows me to send model specifications (and variables) and receive several (up to 100) computed Implied Volatility values in one go - in a few seconds. Not only does this save a great deal of time, but also many lines of code!

In [1]:
import refinitiv.data as rd  # This is LSEG's Data and Analytics' API wrapper, called the Refinitiv Data Library for Python. you can update this library with the comand `!pip install refinitiv-data --upgrade`
from refinitiv.data.content import historical_pricing  # We will use this Python Class in `rd` to show the Implied Volatility data already available before our work.
from refinitiv.data.content import search  # We will use this Python Class in `rd` to fid the instrument we are after, closest to At The Money.
import refinitiv.data.content.ipa.financial_contracts as rdf  # We're going to need thtis to use the content layer of the RD library and the calculators of greeks and Impl Volat in Instrument Pricing Analytics (IPA) and Exchange Traded Instruments (ETI)
from refinitiv.data.content.ipa.financial_contracts import option  # We're going to need thtis to use the content layer of the RD library and the calculators of greeks and Impl Volat in IPA & ETI

import numpy as np  # We need `numpy` for mathematical and array manipilations.
import pandas as pd  # We need `pandas` for datafame and array manipilations.
import calendar  # We use `calendar` to identify holidays and maturity dates of intruments of interest.
import pytz  # We use `pytz` to manipulate time values aiding `calendar` library. to import its types, you might need to run `!python3 -m pip install types-pytz`
import pandas_market_calendars as mcal  # Used to identify holidays. See `https://github.com/rsheftel/pandas_market_calendars/blob/master/examples/usage.ipynb` for info on this market calendar library
from datetime import datetime, timedelta, timezone  # We use these to manipulate time values
from dateutil.relativedelta import relativedelta  # We use `relativedelta` to manipulate time values aiding `calendar` library.
import requests  # We'll need this to send requests to servers vie a the delivery layer - more on that below

# `plotly` is a library used to render interactive graphs:
import plotly.graph_objects as go
import plotly.express as px  # This is just to see the implied vol graph when that field is available
import matplotlib.pyplot as plt  # We use `matplotlib` to just in case users do not have an environment suited to `plotly`.
from IPython.display import clear_output, display  # We use `clear_output` for users who wish to loop graph production on a regular basis. We'll use this to `display` data (e.g.: pandas data-frames).
from plotly import subplots
import plotly

# Let's authenticate ourseves to LSEG's Data and Analytics service, Refinitiv:
try:  # The following libraries are not available in Codebook, thus this try loop
    rd.open_session(config_name="C:\\Example.DataLibrary.Python-main\\Example.DataLibrary.Python-main\\Configuration\\refinitiv-data.config.json")
    rd.open_session("desktop.workspace")
except:
    rd.open_session()
In [2]:
mcal.__version__
Out[2]:
'4.1.0'
In [3]:
print(f"Here we are using the refinitiv Data Library version {rd.__version__}")
Here we are using the refinitiv Data Library version 1.1.1

FYI (For Your Information): We are running Python 3.8:

In [4]:
!python -V
Python 3.8.2

EUREX Call Options¶

In this article, we will attempt to calculate the Implied Volatility (IV) for Future Options on 2 indexes (.STOXX50E & .SPX) trading 'ATM', meaning that the contract's strike price is at (or near - within x%) parity with (equal to) its current treading price (TRDPRC_1). We are also only looking for such Options expiring within a set time window; allowing for the option 'forever', i.e.: that expire whenever after date of calculation. To do so, we 1st have to find the option in question. To find live Options, we best use the Search API. To find Expired Options we will use functions created in Haykaz's amazing articles "Finding Expired Options and Backtesting a Short Iron Condor Strategy" & "Functions to find Option RICs traded on different exchanges"

Finding Live Options (using Search API)¶

Live Options, in this context, are Options that have not expired at time of computation. To be explicit:

  • 'time of calculation' refers here to the time for which the calculation is done, i.e.: if we compute today an IV for an Option as if it was 3 days ago, 'time of calculation' is 3 days ago.
  • 'time of computation' refers here to the time when we are computing the values, i.e.: if we compute today an IV for an Option as if it was 3 days ago, 'time of computation' is today.

As aforementioned, to find live Options, we best use the Search API: Here we look for options on .STOXX50E that mature on the 3rd friday of July 2023, 2023-07-21:

In [5]:
response1 = search.Definition(
    view = search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
    query=".STOXX50E",
    select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, " +
            "UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket" +
            "UnderlyingQuoteName, UnderlyingQuoteRIC, InsertDateTime, RetireDate",
    filter="RCSAssetCategoryLeaf eq 'Option' and RIC eq 'STX*' and DocumentTitle ne '*Weekly*'  " +
    "and CallPutOption eq 'Call' and ExchangeCode eq 'EUX' and " +
    "ExpiryDate ge 2022-07-10 and ExpiryDate lt 2023-07-22",  # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
    top=100).get_data()
searchDf1 = response1.data.df
In [6]:
searchDf1
Out[6]:
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
0 Eurex Monthly EURO STOXX 50 Index Option 4300 ... STXE43000E3.EX 4300 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:30:43 2023-05-23
1 Eurex Monthly EURO STOXX 50 Index Option 4375 ... STXE43750E3.EX 4375 EUX 2023-05-19 [.STOXX50E] 2023-03-09 03:46:05 2023-05-23
2 Eurex Monthly EURO STOXX 50 Index Option 4450 ... STXE44500E3.EX 4450 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:17:52 2023-05-23
3 Eurex Monthly EURO STOXX 50 Index Option 4250 ... STXE42500E3.EX 4250 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:16:37 2023-05-23
4 Eurex Monthly EURO STOXX 50 Index Option 4475 ... STXE44750E3.EX 4475 EUX 2023-05-19 [.STOXX50E] 2023-03-09 03:50:55 2023-05-23
... ... ... ... ... ... ... ... ...
95 Eurex Monthly EURO STOXX 50 Index Option 4875 ... STXE48750F3.EX 4875 EUX 2023-06-16 [.STOXX50E] 2023-03-09 04:00:59 2023-06-20
96 Eurex Monthly EURO STOXX 50 Index Option 4975 ... STXE49750F3.EX 4975 EUX 2023-06-16 [.STOXX50E] 2023-03-09 04:02:52 2023-06-20
97 Eurex Monthly EURO STOXX 50 Index Option 3925 ... STXE39250F3.EX 3925 EUX 2023-06-16 [.STOXX50E] 2023-03-09 04:05:25 2023-06-20
98 Eurex Monthly EURO STOXX 50 Index Option 4825 ... STXE48250F3.EX 4825 EUX 2023-06-16 [.STOXX50E] 2023-03-09 04:01:20 2023-06-20
99 Eurex Monthly EURO STOXX 50 Index Option 4925 ... STXE49250F3.EX 4925 EUX 2023-06-16 [.STOXX50E] 2023-03-09 04:00:33 2023-06-20

100 rows × 8 columns

Let's say the current underlying price is 3331.7EUR, now we can pick the option with strike price closest to that, i.e.: the most 'At The Money'; note that this means that the option can be in or out the money, as long as it is the closest to at the money:

In [7]:
currentUnderlyingPrc = rd.get_history(
    universe=[searchDf1.UnderlyingQuoteRIC[0][0]],
    fields=["TRDPRC_1"],
    interval="tick").iloc[-1][0]
In [8]:
currentUnderlyingPrc
Out[8]:
4340.93
In [9]:
searchDf1.iloc[(searchDf1['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]]
Out[9]:
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
67 Eurex Monthly EURO STOXX 50 Index Option 4350 ... STXE43500G3.EX 4350 EUX 2023-07-21 [.STOXX50E] 2023-03-09 04:01:57 2023-07-25

In this instance, for this Call Option, 'STXE33500G3.EX', the strike price is 3350, higher than the spot price of our underlying which is 3331.7. The holder of this 'STXE33500G3.EX' option has the right (but not the obligation) to buy the underlying for 3350EUR, which, was the price of the underlying to stay the same till expiry (3331.7EUR on 2023-07-21), means a loss of (3350 - 3331.7 =) 18.3EUR. This option in this instance is 'Out-The-Money'.

N.B.: When using the Filter in Search and playing with dates, it is good to read the API Playground Documentation; it mentions that: "Dates are written in ISO datetime format. The time portion is optional, as is the timezone (assumed to be UTC unless otherwise specified). Valid examples include 2012-03-11T17:13:55Z, 2012-03-11T17:13:55, 2012-03-11T12:00-03:30, 2012-03-11.":

Function for Expiration days¶

Most of the time, market agents will be interested in the next expiring Option, unless we are too close to it. We would not be interested, for example, in an option expiring in 1 hour, or even tomorrow, because that is so close (in time) that the information reflected in the Option's trades in the market does not represent future expectations of its underlying, but current expectations of it.

To implement such a logic, we need to know what are the expiry dates of the option that we are interested in. We are looking for a Python function narrowing our search to options expiring on the 3rd Friday of any one month. For info on this function, please read articles "Finding Expired Options and Backtesting a Short Iron Condor Strategy" & "Functions to find Option RICs traded on different exchanges"

In [10]:
def Get_exp_dates(year, days=True, mcal_get_calendar='EUREX'):
    '''
    Get_exp_dates Version 2.0:

    This function gets expiration dates for a year for NDX options, which are the 3rd Fridays of each month.

    Changes
    ----------------------------------------------
    Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
        (i) function name changed from `get_exp_dates` to `Get_exp_dates`
        (ii) for the function's holiday argument to be changed, allowing for any calendar supported by `mcal.get_calendar` and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and
        (iii) for the function to output full date objects as opposed to just days of the month if agument days=True.

    Dependencies
    ----------------------------------------------
    Python library 'pandas_market_calendars' version '3.2'.
    pandas_market_calendars as mcal version '4.1.0'.

    Parameters
    -----------------------------------------------
    Input:
        year(int): year for which expiration days are requested

        mcal_get_calendar(str): String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
            Default: mcal_get_calendar='EUREX'

        days(bool): If True, only days of the month is outputed, else it's dataeime objects
            Default: days=True

    Output:
        dates(dict): dictionary of expiration days for each month of a specified year in datetime.date format.
    '''

    # get CBOE market holidays
    EUREXCal = mcal.get_calendar(mcal_get_calendar)
    holidays = EUREXCal.holidays().holidays

    # set calendar starting from Saturday
    c = calendar.Calendar(firstweekday=calendar.SATURDAY)

    # get the 3rd Friday of each month
    exp_dates = {}
    for i in range(1, 13):
        monthcal = c.monthdatescalendar(year, i)
        date = monthcal[2][-1]
        # check if found date is an holiday and get the previous date if it is
        if date in holidays:
            date = date + timedelta(-1)
        # append the date to the dictionary
        if year in exp_dates:
            ### Changed from original code from here on by Jonathan Legrand on 2022-10-11
            if days: exp_dates[year].append(date.day)
            else: exp_dates[year].append(date)
        else:
            if days: exp_dates[year] = [date.day]
            else: exp_dates[year] = [date]
    return exp_dates
In [11]:
fullDates = Get_exp_dates(2022, days=False)
dates = Get_exp_dates(2022)
fullDatesStrDict = {i: [fullDates[i][j].strftime('%Y-%m-%d')
                        for j in range(len(fullDates[i]))]
                    for i in list(fullDates.keys())}
fullDatesDayDict = {i: [fullDates[i][j].day
                        for j in range(len(fullDates[i]))]
                    for i in list(fullDates.keys())}
In [12]:
print(fullDates)
{2022: [datetime.date(2022, 1, 21), datetime.date(2022, 2, 18), datetime.date(2022, 3, 18), datetime.date(2022, 4, 14), datetime.date(2022, 5, 20), datetime.date(2022, 6, 17), datetime.date(2022, 7, 15), datetime.date(2022, 8, 19), datetime.date(2022, 9, 16), datetime.date(2022, 10, 21), datetime.date(2022, 11, 18), datetime.date(2022, 12, 16)]}
In [13]:
print(fullDatesStrDict)
{2022: ['2022-01-21', '2022-02-18', '2022-03-18', '2022-04-14', '2022-05-20', '2022-06-17', '2022-07-15', '2022-08-19', '2022-09-16', '2022-10-21', '2022-11-18', '2022-12-16']}
In [14]:
print(dates)
{2022: [21, 18, 18, 14, 20, 17, 15, 19, 16, 21, 18, 16]}
In [15]:
print(fullDatesDayDict)
{2022: [21, 18, 18, 14, 20, 17, 15, 19, 16, 21, 18, 16]}

Function to find the next expiring Option outside the next x day window¶

Most of the time, market agents will be interested in the next expiring Option, unless we are too close to it. We would not be interested, for example, in an option expiring in 1 hour, or even tomorrow, because that is so close (in time) that the information reflected in the Option's trades in the market does not represent future expectations of its underlying, but current expectations of it.

E.g.: I would like to know what is the next Future (Monthly) Option (i) on the Index '.STOXX50E' (ii) closest to ATM (i.e.: with an underlying spot price closest to the option's strike price) (ii) Expiring in more than x days (i.e.: not too close to calculated time 't'), let's say 15 days:

In [16]:
x = 15
In [17]:
timeOfCalcDatetime = datetime.now()  # For now, we will focuss on the use-case where we are calculating values for today; later we will allow for it historically for any day going back a few business days.
timeOfCalcStr = datetime.now().strftime('%Y-%m-%d')
timeOfCalcStr
Out[17]:
'2023-04-26'
In [18]:
fullDatesAtTimeOfCalc = Get_exp_dates(timeOfCalcDatetime.year, days=False)  # `timeOfCalcDatetime.year` here is 2023
fullDatesAtTimeOfCalcDatetime = [
    datetime(i.year, i.month, i.day)
    for i in fullDatesAtTimeOfCalc[list(fullDatesAtTimeOfCalc.keys())[0]]]
In [19]:
print(fullDatesAtTimeOfCalcDatetime)
[datetime.datetime(2023, 1, 20, 0, 0), datetime.datetime(2023, 2, 17, 0, 0), datetime.datetime(2023, 3, 17, 0, 0), datetime.datetime(2023, 4, 21, 0, 0), datetime.datetime(2023, 5, 19, 0, 0), datetime.datetime(2023, 6, 16, 0, 0), datetime.datetime(2023, 7, 21, 0, 0), datetime.datetime(2023, 8, 18, 0, 0), datetime.datetime(2023, 9, 15, 0, 0), datetime.datetime(2023, 10, 20, 0, 0), datetime.datetime(2023, 11, 17, 0, 0), datetime.datetime(2023, 12, 15, 0, 0)]
In [20]:
expiryDateOfInt = [i for i in fullDatesAtTimeOfCalcDatetime
                   if i > timeOfCalcDatetime + relativedelta(days=x)][0]
expiryDateOfInt
Out[20]:
datetime.datetime(2023, 5, 19, 0, 0)

Now we can look for the one option we're after:

In [21]:
response2 = search.Definition(
    view=search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
    query=".STOXX50E",
    select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, " +
            "UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket" +
            "UnderlyingQuoteName, UnderlyingQuoteRIC, InsertDateTime, RetireDate",
    filter="RCSAssetCategoryLeaf eq 'Option' and RIC eq 'STX*' and DocumentTitle ne '*Weekly*' " +
    "and CallPutOption eq 'Call' and ExchangeCode eq 'EUX' and " +
    f"ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} " +
    f"and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}",  # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
    top=10000,
).get_data()
searchDf2 = response2.data.df
In [22]:
searchDf2
Out[22]:
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
0 Eurex Monthly EURO STOXX 50 Index Option 4300 ... STXE43000E3.EX 4300 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:30:43 2023-05-23
1 Eurex Monthly EURO STOXX 50 Index Option 4375 ... STXE43750E3.EX 4375 EUX 2023-05-19 [.STOXX50E] 2023-03-09 03:46:05 2023-05-23
2 Eurex Monthly EURO STOXX 50 Index Option 4450 ... STXE44500E3.EX 4450 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:17:52 2023-05-23
3 Eurex Monthly EURO STOXX 50 Index Option 4250 ... STXE42500E3.EX 4250 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:16:37 2023-05-23
4 Eurex Monthly EURO STOXX 50 Index Option 4475 ... STXE44750E3.EX 4475 EUX 2023-05-19 [.STOXX50E] 2023-03-09 03:50:55 2023-05-23
... ... ... ... ... ... ... ... ...
141 Eurex Monthly EURO STOXX 50 Index Option 3175 ... STXE31750E3.EX 3175 EUX 2023-05-19 [.STOXX50E] 2023-03-09 03:54:41 2023-05-23
142 Eurex Monthly EURO STOXX 50 Index Option 2475 ... STXE24750E3.EX 2475 EUX 2023-05-19 [.STOXX50E] 2023-03-09 03:49:26 2023-05-23
143 Eurex Monthly EURO STOXX 50 Index Option 5350 ... STXE53500E3.EX 5350 EUX 2023-05-19 [.STOXX50E] 2023-04-14 01:13:19 2023-05-23
144 Eurex Monthly EURO STOXX 50 Index Option 5375 ... STXE53750E3.EX 5375 EUX 2023-05-19 [.STOXX50E] 2023-04-14 01:13:18 2023-05-23
145 Eurex Monthly EURO STOXX 50 Index Option 5400 ... STXE54000E3.EX 5400 EUX 2023-05-19 [.STOXX50E] 2023-04-17 00:16:44 2023-05-23

146 rows × 8 columns

And again, we can collect the closest to ATM:

In [23]:
searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]]
Out[23]:
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
6 Eurex Monthly EURO STOXX 50 Index Option 4350 ... STXE43500E3.EX 4350 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:19:00 2023-05-23

Now we have our instrument:

In [24]:
instrument = searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]].RIC.values[0]
instrument
Out[24]:
'STXE43500E3.EX'

Refinitiv-provided Daily Implied Volatility¶

Refinitiv provides pre-calculated Implied Volatility values, but they are daily, and we will look into calculating them in higher frequencies:

In [25]:
## Example Options:

# instrument_1 = 'SPXv212240000.U'
# instrument_2 = 'STXE35500J2.EX'  # Eurex Dow Jones EURO STOXX 50 Index Option 3550 Call Oct 2022, Stock Index Cash Option, Underlying RIC: .STOXX50E
# instrument_3 = 'SPXj212240000.U'
In [26]:
datetime.now().isoformat(timespec='minutes')
Out[26]:
'2023-04-26T12:03'
In [27]:
start = (timeOfCalcDatetime - pd.tseries.offsets.BDay(5)).strftime('%Y-%m-%dT%H:%M:%S.%f')  # '2022-10-05T07:30:00.000'
endDateTime = datetime.now()
end = endDateTime.strftime('%Y-%m-%dT%H:%M:%S.%f')  #  e.g.: '2022-09-09T20:00:00.000'
end
Out[27]:
'2023-04-26T12:03:02.958946'
In [28]:
_RefDailyImpVolDf = historical_pricing.events.Definition(
    instrument, fields=['IMP_VOLT'], count=2000).get_data()
In [29]:
_RefDailyImpVolDf.data.df.head()
Out[29]:
STXE43500E3.EX IMP_VOLT
Timestamp
2023-01-25 00:53:36.445 11.9003
2023-01-26 00:53:38.882 11.9305
2023-01-27 00:53:32.994 11.5065
2023-01-28 00:53:37.312 11.0921
2023-01-31 00:53:38.561 11.7701
In [30]:
try: RefDailyImpVolDf = _RefDailyImpVolDf.data.df.drop(['EVENT_TYPE'], axis=1)  # In codebook, this line is needed
except: RefDailyImpVolDf = _RefDailyImpVolDf.data.df # If outside of codebook
fig = px.line(RefDailyImpVolDf, title = RefDailyImpVolDf.columns.name + " " + RefDailyImpVolDf.columns[0]) # This is just to see the implied vol graph when that field is available
fig.show()

Option Price¶

In [31]:
# rd.get_history(
#     universe=["STXE35500J2.EX"],
#     fields=["TRDPRC_1"],
#     interval="tick")
In [32]:
_optnMrktPrice = rd.get_history(
    universe=[instrument],
    fields=["TRDPRC_1"],
    interval="10min",
    start=start,  # Ought to always start at 4 am for OPRA exchanged Options, more info in the article below
    end=end)  # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below

As you can see, there isn't nessesarily a trade every 10 min.:

In [33]:
_optnMrktPrice.head()
Out[33]:
STXE43500E3.EX TRDPRC_1
Timestamp
2023-04-19 13:00:00 61.5
2023-04-19 15:10:00 65.8
2023-04-19 15:20:00 64.8
2023-04-20 07:30:00 57.1
2023-04-20 07:40:00 59.5

However, for the statistical inferences that we will make further in the article, when we will calculate Implied Volatilities and therefore implement the Black Scholes model, we will need 'continuous timeseries' with which to deal. There are several ways to go from discrete time series (like ours, even if we go down to tick data), but for this article, we will 1st focus on making 'buckets' of 10 min. If no trade is made in any 10 min. bucket, we will assume the price to have stayed the same as previously, throughout the exchange's trading hours which are:

  • 4am to 8pm ET for OPRA and
  • typically 7:30am to 22:00 CET at the Eurex Exchange (EUREX)

thankfully this is simple. Let's stick with the EUREX for now:

In [34]:
optnMrktPrice = _optnMrktPrice.resample('10Min').mean() # get a datapoint every 10 min
optnMrktPrice = optnMrktPrice[optnMrktPrice.index.strftime('%Y-%m-%d').isin([i for i in _optnMrktPrice.index.strftime('%Y-%m-%d').unique()])]  # Only keep trading days
optnMrktPrice = optnMrktPrice.loc[(optnMrktPrice.index.strftime('%H:%M:%S') >= '07:30:00') & (optnMrktPrice.index.strftime('%H:%M:%S') <= '22:00:00')]  # Only keep trading hours
optnMrktPrice.fillna(method='ffill', inplace=True)  # Forward Fill to populate NaN values
print(f"Our dataframe started at {str(optnMrktPrice.index[0])} and went on continuously till {str(optnMrktPrice.index[-1])}, so out of trading hours rows are removed")
optnMrktPrice
Our dataframe started at 2023-04-19 13:00:00 and went on continuously till 2023-04-26 08:30:00, so out of trading hours rows are removed
Out[34]:
STXE43500E3.EX TRDPRC_1
Timestamp
2023-04-19 13:00:00 61.5
2023-04-19 13:10:00 61.5
2023-04-19 13:20:00 61.5
2023-04-19 13:30:00 61.5
2023-04-19 13:40:00 61.5
... ...
2023-04-26 07:50:00 38.5
2023-04-26 08:00:00 38.8
2023-04-26 08:10:00 38.2
2023-04-26 08:20:00 38.2
2023-04-26 08:30:00 45.8

414 rows × 1 columns

Note that the option might not have traded in the past 10 min. This can cause issues in the code below, we thus ought to add a row for the current time:

In [35]:
# optnMrktPrice = optnMrktPrice.append(
#     pd.DataFrame(
#         [[pd.NA]], columns=optnMrktPrice.columns,
#         index=[(endDateTime + (datetime.min - endDateTime) % timedelta(minutes=10))]))
# optnMrktPrice

Note also that one may want to only look at 'At Option Trade' datapoints, i.e.: Implied Volatility when a trade is made for the Option, but not when none is made. For this, we will use the 'At Trade' (AT) dataframes:

In [36]:
AToptnMrktPrice = _optnMrktPrice
AToptnMrktPrice
Out[36]:
STXE43500E3.EX TRDPRC_1
Timestamp
2023-04-19 13:00:00 61.5
2023-04-19 15:10:00 65.8
2023-04-19 15:20:00 64.8
2023-04-20 07:30:00 57.1
2023-04-20 07:40:00 59.5
... ...
2023-04-26 07:40:00 40.8
2023-04-26 07:50:00 38.5
2023-04-26 08:00:00 38.8
2023-04-26 08:10:00 38.2
2023-04-26 08:30:00 45.8

67 rows × 1 columns

Underlying Asset Price¶

Now let's get data for the underying, which we need to calculate IV:

In [37]:
underlying = searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]].UnderlyingQuoteRIC.values[0][0]
underlying
Out[37]:
'.STOXX50E'

Opening Times Of Any One Exchange¶

If you are interested in the opening times of any one exchange, you can use the following:

In [38]:
hoursDf = rd.get_data(
    universe=["EUREX21"],
    fields=["ROW80_10"])
display(hoursDf)
hoursDf.iloc[0,1]
Instrument ROW80_10
0 EUREX21 OGBL/OGBM/OGBS 07:30-08:00 08:0...
Out[38]:
'       OGBL/OGBM/OGBS     07:30-08:00     08:00-19:00     19:00-20:00           '
In [39]:
_underlyingMrktPrice = rd.get_history(
    universe=[underlying],
    fields=["TRDPRC_1"],
    interval="10min",
    start=start,
    end=end)
In [40]:
_underlyingMrktPrice
Out[40]:
.STOXX50E TRDPRC_1
Timestamp
2023-04-19 12:10:00 4382.8
2023-04-19 12:20:00 4385.49
2023-04-19 12:30:00 4384.22
2023-04-19 12:40:00 4383.8
2023-04-19 12:50:00 4383.17
... ...
2023-04-26 09:20:00 4345.69
2023-04-26 09:30:00 4345.49
2023-04-26 09:40:00 4345.46
2023-04-26 09:50:00 4339.81
2023-04-26 10:00:00 4340.53

253 rows × 1 columns

In [41]:
ATunderlyingMrktPrice = AToptnMrktPrice.join(
    _underlyingMrktPrice, lsuffix='_OptPr', rsuffix='_UnderlyingPr', how='inner')
ATunderlyingMrktPrice.head(2)
Out[41]:
TRDPRC_1_OptPr TRDPRC_1_UnderlyingPr
Timestamp
2023-04-19 13:00:00 61.5 4386.34
2023-04-19 15:10:00 65.8 4396.66

Let's put it all in one data-frame, df. Some datasets will have data going from the time we sort for start all the way to end. Some won't because no trade happened in the past few minutes/hours. We ought to base ourselves on the dataset with values getting closer to end and ffill for the other column. As a result, the following if loop is needed:

In [42]:
if optnMrktPrice.index[-1] >= _underlyingMrktPrice.index[-1]:
    df = optnMrktPrice.copy()
    df['underlying ' + underlying + ' TRDPRC_1'] = _underlyingMrktPrice
else:
    df = _underlyingMrktPrice.copy()
    df.rename(columns={"TRDPRC_1": 'underlying ' + underlying + ' TRDPRC_1'}, inplace=True)
    df['TRDPRC_1'] = optnMrktPrice
    df.columns.name = optnMrktPrice.columns.name
df.fillna(method='ffill', inplace=True)  # Forward Fill to populate NaN values
df = df.dropna()
df
Out[42]:
STXE43500E3.EX underlying .STOXX50E TRDPRC_1 TRDPRC_1
Timestamp
2023-04-19 13:00:00 4386.34 61.5
2023-04-19 13:10:00 4387.22 61.5
2023-04-19 13:20:00 4386.43 61.5
2023-04-19 13:30:00 4388.84 61.5
2023-04-19 13:40:00 4388.19 61.5
... ... ...
2023-04-26 09:20:00 4345.69 45.8
2023-04-26 09:30:00 4345.49 45.8
2023-04-26 09:40:00 4345.46 45.8
2023-04-26 09:50:00 4339.81 45.8
2023-04-26 10:00:00 4340.53 45.8

248 rows × 2 columns

Strike Price¶

In [43]:
strikePrice = searchDf2.iloc[
    (searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]].StrikePrice.values[0]
In [44]:
strikePrice
Out[44]:
4350

Risk-Free Interest Rate¶

In [45]:
(datetime.strptime(start, '%Y-%m-%dT%H:%M:%S.%f') - timedelta(days=1)).strftime('%Y-%m-%d')
Out[45]:
'2023-04-18'
In [46]:
rd.get_history(
    universe=['EURIBOR3MD='],
    fields=['TR.FIXINGVALUE'],
    start='2023-03-18',
    end='2023-04-27')
Out[46]:
EURIBOR3MD= Fixing Value
Date
2023-03-20 2.892
2023-03-21 2.908
2023-03-22 3.002
2023-03-23 2.99
2023-03-24 3.025
2023-03-27 3.012
2023-03-28 2.99
2023-03-29 3.015
2023-03-30 3.052
2023-03-31 3.038
2023-04-03 3.053
2023-04-04 3.052
2023-04-05 3.055
2023-04-06 3.075
2023-04-06 3.075
2023-04-06 3.075
2023-04-11 3.108
2023-04-12 3.126
2023-04-13 3.177
2023-04-14 3.175
2023-04-17 3.219
2023-04-18 3.2
2023-04-19 3.205
2023-04-20 3.211
2023-04-21 3.261
2023-04-24 3.288
2023-04-25 3.268
2023-04-26 3.242
In [47]:
_EurRfRate = rd.get_history(
    universe=['EURIBOR3MD='],  # USD3MFSR=, USDSOFR=
    fields=['TR.FIXINGVALUE'],
    # Since we will use `dropna()` as a way to select the rows we are after later on in the code, we need to ask for more risk-free data than needed, just in case we don't have enough:
    start=(datetime.strptime(start, '%Y-%m-%dT%H:%M:%S.%f') - timedelta(days=1)).strftime('%Y-%m-%d'),
    end=(datetime.strptime(end, '%Y-%m-%dT%H:%M:%S.%f') + timedelta(days=1)).strftime('%Y-%m-%d'))
In [48]:
_EurRfRate
Out[48]:
EURIBOR3MD= Fixing Value
Date
2023-04-18 3.2
2023-04-19 3.205
2023-04-20 3.211
2023-04-21 3.261
2023-04-24 3.288
2023-04-25 3.268
2023-04-26 3.242

Euribor values are released daily at 11am CET, and it is published as such on Refinitiv:

In [49]:
_EurRfRate
Out[49]:
EURIBOR3MD= Fixing Value
Date
2023-04-18 3.2
2023-04-19 3.205
2023-04-20 3.211
2023-04-21 3.261
2023-04-24 3.288
2023-04-25 3.268
2023-04-26 3.242
In [50]:
EurRfRate = _EurRfRate.resample('10Min').mean().fillna(method='ffill')
df['EurRfRate'] = EurRfRate

You might be running your code after the latest Risk Free Rate published, so the most accurate such value after taht would be the latest value, thus the use of ffill:

In [51]:
df = df.fillna(method='ffill')
df
Out[51]:
STXE43500E3.EX underlying .STOXX50E TRDPRC_1 TRDPRC_1 EurRfRate
Timestamp
2023-04-19 13:00:00 4386.34 61.5 3.205
2023-04-19 13:10:00 4387.22 61.5 3.205
2023-04-19 13:20:00 4386.43 61.5 3.205
2023-04-19 13:30:00 4388.84 61.5 3.205
2023-04-19 13:40:00 4388.19 61.5 3.205
... ... ... ...
2023-04-26 09:20:00 4345.69 45.8 3.268
2023-04-26 09:30:00 4345.49 45.8 3.268
2023-04-26 09:40:00 4345.46 45.8 3.268
2023-04-26 09:50:00 4339.81 45.8 3.268
2023-04-26 10:00:00 4340.53 45.8 3.268

248 rows × 3 columns

Now for the At Trade dataframe:

In [52]:
pd.options.mode.chained_assignment = None  # default='warn'
ATunderlyingMrktPrice['EurRfRate'] = [pd.NA for i in ATunderlyingMrktPrice.index]
for i in _EurRfRate.index:
    _i = str(i)[:10]
    for n, j in enumerate(ATunderlyingMrktPrice.index):
        if _i in str(j):
            if len(_EurRfRate.loc[i].values) == 2:
                ATunderlyingMrktPrice['EurRfRate'].iloc[n] = _EurRfRate.loc[i].values[0][0]
            elif len(_EurRfRate.loc[i].values) == 1:
                ATunderlyingMrktPrice['EurRfRate'].iloc[n] = _EurRfRate.loc[i].values[0]
ATdf = ATunderlyingMrktPrice.copy()

Again, you might be running your code after the latest Risk Free Rate published, so the most accurate such value after that would be the latest value, thus the use of ffill:

In [53]:
ATdf = ATdf.fillna(method='ffill')
ATdf.head(2)
Out[53]:
TRDPRC_1_OptPr TRDPRC_1_UnderlyingPr EurRfRate
Timestamp
2023-04-19 13:00:00 61.5 4386.34 3.205
2023-04-19 15:10:00 65.8 4396.66 3.205

Annualized Continuous Dividend Rate¶

We are going to assume no dividends.

Calculating IV¶

On the Developer Portal, one can see documentation about the Instrument Pricing Analytics service that allows access to calculating functions (that use to be called 'AdFin'). This service is accessible via several RESTful endpoints (in a family of endpoints called 'Quantitative Analytics') which can be used via RD. However, While we are going to build towards a Class that will put all our concepts together, I 1st want to showcase the several ways in which we can collect the data we're are after, for (i) all trades & (ii) at option trades only (i.e.: not every trade of the underlying) and (a) using the RD delivery layer & (b) the RD content layer:

Data returned this far was time-stamped in the GMT Time Zone, we need to re-calibrate it to the timezone of our machine:

All Trades¶

In [54]:
dfGMT = df.copy()
dfLocalTimeZone = df.copy()
dfLocalTimeZone.index = [
    df.index[i].replace(
        tzinfo=pytz.timezone(
            'GMT')).astimezone(
        tz=datetime.now().astimezone().tzinfo)
    for i in range(len(df))]
In [55]:
dfGMT
Out[55]:
STXE43500E3.EX underlying .STOXX50E TRDPRC_1 TRDPRC_1 EurRfRate
Timestamp
2023-04-19 13:00:00 4386.34 61.5 3.205
2023-04-19 13:10:00 4387.22 61.5 3.205
2023-04-19 13:20:00 4386.43 61.5 3.205
2023-04-19 13:30:00 4388.84 61.5 3.205
2023-04-19 13:40:00 4388.19 61.5 3.205
... ... ... ...
2023-04-26 09:20:00 4345.69 45.8 3.268
2023-04-26 09:30:00 4345.49 45.8 3.268
2023-04-26 09:40:00 4345.46 45.8 3.268
2023-04-26 09:50:00 4339.81 45.8 3.268
2023-04-26 10:00:00 4340.53 45.8 3.268

248 rows × 3 columns

In [56]:
dfLocalTimeZone
Out[56]:
STXE43500E3.EX underlying .STOXX50E TRDPRC_1 TRDPRC_1 EurRfRate
2023-04-19 15:00:00+02:00 4386.34 61.5 3.205
2023-04-19 15:10:00+02:00 4387.22 61.5 3.205
2023-04-19 15:20:00+02:00 4386.43 61.5 3.205
2023-04-19 15:30:00+02:00 4388.84 61.5 3.205
2023-04-19 15:40:00+02:00 4388.19 61.5 3.205
... ... ... ...
2023-04-26 11:20:00+02:00 4345.69 45.8 3.268
2023-04-26 11:30:00+02:00 4345.49 45.8 3.268
2023-04-26 11:40:00+02:00 4345.46 45.8 3.268
2023-04-26 11:50:00+02:00 4339.81 45.8 3.268
2023-04-26 12:00:00+02:00 4340.53 45.8 3.268

248 rows × 3 columns

In [57]:
requestFields = [
    "MarketValueInDealCcy", "RiskFreeRatePercent",
    "UnderlyingPrice", "PricingModelType",
    "DividendType",
    "UnderlyingTimeStamp", "ReportCcy",
    "VolatilityType", "Volatility",
    "DeltaPercent", "GammaPercent",
    "RhoPercent", "ThetaPercent",
    "VegaPercent"]

Delivery Layer¶

Now for the At Trade dataframe:

In [58]:
universeL = [
        {
          "instrumentType": "Option",
          "instrumentDefinition": {
            "buySell": "Buy",
            "underlyingType": "Eti",
            "instrumentCode": instrument,
            "strike": str(strikePrice),
          },
          "pricingParameters": {
            "marketValueInDealCcy": str(dfLocalTimeZone['TRDPRC_1'][i]),
            "riskFreeRatePercent": str(dfLocalTimeZone['EurRfRate'][i]),
            "underlyingPrice": str(dfLocalTimeZone['underlying ' + underlying + ' TRDPRC_1'][i]),
            "pricingModelType": "BlackScholes",
            "dividendType": "ImpliedYield",
            "volatilityType": "Implied",
            "underlyingTimeStamp": "Default",
            "reportCcy": "EUR"
          }
        }
      for i in range(len(dfLocalTimeZone.index))]
In [59]:
def Chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

This is the cell, next coming up below, that has a rather high chance of failing. This is because there is no error handling of any kind, just in case there are issues on the servers where we are retreiving data. The COntent Layer functions do have such error handing steps, and therefore is considerably less likely to fail or run into errors.

In [60]:
batchOf = 100
for i, j in enumerate(Chunks(universeL, batchOf)):
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(universeL, batchOf)]))} started")
    # Example request with Body Parameter - Symbology Lookup
    request_definition = rd.delivery.endpoint_request.Definition(
        method=rd.delivery.endpoint_request.RequestMethod.POST,
        url='https://api.refinitiv.com/data/quantitative-analytics/v1/financial-contracts',
        body_parameters={"fields": requestFields,
                         "outputs": ["Data", "Headers"],
                         "universe": j})

    response3 = request_definition.get_data()
    headers_name = [h['name'] for h in response3.data.raw['headers']]

    if i == 0:
        response3df = pd.DataFrame(
            data=response3.data.raw['data'], columns=headers_name)
    else:
        _response3df = pd.DataFrame(
            data=response3.data.raw['data'], columns=headers_name)
        response3df = response3df.append(_response3df, ignore_index=True)
    # display(_response3df)
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(universeL, batchOf)]))} ended")
Batch of 100 requests no. 1/3 started
Batch of 100 requests no. 1/3 ended
Batch of 100 requests no. 2/3 started
Batch of 100 requests no. 2/3 ended
Batch of 100 requests no. 3/3 started
Batch of 100 requests no. 3/3 ended
In [61]:
response3df
Out[61]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType Volatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
0 61.5 3.205 4386.34 BlackScholes ImpliedYield Default EUR Calculated 14.289085 0.498699 0.002509 1.339649 -0.526622 4.346190
1 61.5 3.205 4387.22 BlackScholes ImpliedYield Default EUR Calculated 14.187898 0.500850 0.002526 1.345872 -0.513544 4.346759
2 61.5 3.205 4386.43 BlackScholes ImpliedYield Default EUR Calculated 14.278755 0.498918 0.002511 1.340282 -0.525291 4.346254
3 61.5 3.205 4388.84 BlackScholes ImpliedYield Default EUR Calculated 14.000505 0.504891 0.002558 1.357558 -0.489109 4.347447
4 61.5 3.205 4388.19 BlackScholes ImpliedYield Default EUR Calculated 14.075870 0.503257 0.002545 1.352833 -0.498970 4.347229
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
243 45.8 3.268 4345.69 BlackScholes ImpliedYield Default EUR Calculated 14.872271 0.401943 0.002366 1.071814 -0.699170 4.186606
244 45.8 3.268 4345.49 BlackScholes ImpliedYield Default EUR Calculated 14.891467 0.401607 0.002362 1.070841 -0.701147 4.185540
245 45.8 3.268 4345.46 BlackScholes ImpliedYield Default EUR Calculated 14.894345 0.401556 0.002362 1.070696 -0.701443 4.185380
246 45.8 3.268 4339.81 BlackScholes ImpliedYield Default EUR Calculated 15.432100 0.392392 0.002269 1.044204 -0.755817 4.154954
247 45.8 3.268 4340.53 BlackScholes ImpliedYield Default EUR Calculated 15.364038 0.393524 0.002280 1.047479 -0.749039 4.158861

248 rows × 14 columns

Content Layer¶

As may (or may not) have been apparent aboe, the delivery layer does not offer any error hendling management. The server where we're requestig for data may be busy, so we may get unsuccessful messages back. You could build error handing logic yourself, but let's not reinvent the wheel when the RD Python Library exists!

In [62]:
dfLocalTimeZone
Out[62]:
STXE43500E3.EX underlying .STOXX50E TRDPRC_1 TRDPRC_1 EurRfRate
2023-04-19 15:00:00+02:00 4386.34 61.5 3.205
2023-04-19 15:10:00+02:00 4387.22 61.5 3.205
2023-04-19 15:20:00+02:00 4386.43 61.5 3.205
2023-04-19 15:30:00+02:00 4388.84 61.5 3.205
2023-04-19 15:40:00+02:00 4388.19 61.5 3.205
... ... ... ...
2023-04-26 11:20:00+02:00 4345.69 45.8 3.268
2023-04-26 11:30:00+02:00 4345.49 45.8 3.268
2023-04-26 11:40:00+02:00 4345.46 45.8 3.268
2023-04-26 11:50:00+02:00 4339.81 45.8 3.268
2023-04-26 12:00:00+02:00 4340.53 45.8 3.268

248 rows × 3 columns

In [63]:
CuniverseL = [  # C here is for the fact that we're using the content layer
    option.Definition(
        underlying_type=option.UnderlyingType.ETI,
        buy_sell='Buy',
        instrument_code=instrument,
        strike=float(strikePrice),
        pricing_parameters=option.PricingParameters(
            market_value_in_deal_ccy=float(dfLocalTimeZone['TRDPRC_1'][i]),
            risk_free_rate_percent=float(dfLocalTimeZone['EurRfRate'][i]),
            underlying_price=float(dfLocalTimeZone[
                'underlying ' + underlying + ' TRDPRC_1'][i]),
            pricing_model_type='BlackScholes',
            volatility_type='Implied',
            underlying_time_stamp='Default',
            report_ccy='EUR'))
    for i in range(len(dfLocalTimeZone.index))]
In [64]:
batchOf = 100
for i, j in enumerate(Chunks(CuniverseL, batchOf)):
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(CuniverseL, 100)])} started")
    # Example request with Body Parameter - Symbology Lookup
    response4 = rdf.Definitions(universe=j, fields=requestFields)
    response4 = response4.get_data()
    if i == 0:
        response4df = response4.data.df
    else:
        response4df = response4df.append(response4.data.df, ignore_index=True)
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(CuniverseL, 100)])} ended")
Batch of 100 requests no. 1/3 started
Batch of 100 requests no. 1/3 ended
Batch of 100 requests no. 2/3 started
Batch of 100 requests no. 2/3 ended
Batch of 48 requests no. 3/3 started
Batch of 48 requests no. 3/3 ended
In [65]:
IPADf = response4df.copy()  # IPA here stands for the service we used to get all the calculated valuse, Instrument Pricint Analitycs.
IPADf.index = dfLocalTimeZone.index
IPADf.columns.name = dfLocalTimeZone.columns.name
IPADf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)
IPADf
Out[65]:
STXE43500E3.EX MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType ImpliedVolatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
2023-04-19 15:00:00+02:00 61.5 3.205 4386.34 BlackScholes ImpliedYield Default EUR Calculated 14.289085 0.498699 0.002509 1.339649 -0.526622 4.34619
2023-04-19 15:10:00+02:00 61.5 3.205 4387.22 BlackScholes ImpliedYield Default EUR Calculated 14.187898 0.50085 0.002526 1.345872 -0.513544 4.346759
2023-04-19 15:20:00+02:00 61.5 3.205 4386.43 BlackScholes ImpliedYield Default EUR Calculated 14.278755 0.498918 0.002511 1.340282 -0.525291 4.346254
2023-04-19 15:30:00+02:00 61.5 3.205 4388.84 BlackScholes ImpliedYield Default EUR Calculated 14.000505 0.504891 0.002558 1.357558 -0.489109 4.347447
2023-04-19 15:40:00+02:00 61.5 3.205 4388.19 BlackScholes ImpliedYield Default EUR Calculated 14.07587 0.503257 0.002545 1.352833 -0.49897 4.347229
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2023-04-26 11:20:00+02:00 45.8 3.268 4345.69 BlackScholes ImpliedYield Default EUR Calculated 14.872271 0.401943 0.002366 1.071814 -0.69917 4.186606
2023-04-26 11:30:00+02:00 45.8 3.268 4345.49 BlackScholes ImpliedYield Default EUR Calculated 14.891467 0.401607 0.002362 1.070841 -0.701147 4.18554
2023-04-26 11:40:00+02:00 45.8 3.268 4345.46 BlackScholes ImpliedYield Default EUR Calculated 14.894345 0.401556 0.002362 1.070696 -0.701443 4.18538
2023-04-26 11:50:00+02:00 45.8 3.268 4339.81 BlackScholes ImpliedYield Default EUR Calculated 15.4321 0.392392 0.002269 1.044204 -0.755817 4.154954
2023-04-26 12:00:00+02:00 45.8 3.268 4340.53 BlackScholes ImpliedYield Default EUR Calculated 15.364038 0.393524 0.00228 1.047479 -0.749039 4.158861

248 rows × 14 columns

At Option Trade Only¶

In [66]:
ATdfGMT = ATdf.copy()
ATdfLocalTimeZone = ATdf.copy()
ATdfLocalTimeZone.index = [
    ATdf.index[i].replace(
        tzinfo=pytz.timezone(
            'GMT')).astimezone(
        tz=datetime.now().astimezone().tzinfo)
    for i in range(len(ATdf))]
ATdfGMT
Out[66]:
TRDPRC_1_OptPr TRDPRC_1_UnderlyingPr EurRfRate
Timestamp
2023-04-19 13:00:00 61.5 4386.34 3.205
2023-04-19 15:10:00 65.8 4396.66 3.205
2023-04-19 15:20:00 64.8 4394.17 3.205
2023-04-20 07:30:00 57.1 4376.91 3.211
2023-04-20 07:40:00 59.5 4381.26 3.211
... ... ... ...
2023-04-26 07:40:00 40.8 4329.96 3.242
2023-04-26 07:50:00 38.5 4332.83 3.242
2023-04-26 08:00:00 38.8 4326.07 3.242
2023-04-26 08:10:00 38.2 4334.58 3.242
2023-04-26 08:30:00 45.8 4348.25 3.242

67 rows × 3 columns

In [67]:
ATdfLocalTimeZone
Out[67]:
TRDPRC_1_OptPr TRDPRC_1_UnderlyingPr EurRfRate
2023-04-19 15:00:00+02:00 61.5 4386.34 3.205
2023-04-19 17:10:00+02:00 65.8 4396.66 3.205
2023-04-19 17:20:00+02:00 64.8 4394.17 3.205
2023-04-20 09:30:00+02:00 57.1 4376.91 3.211
2023-04-20 09:40:00+02:00 59.5 4381.26 3.211
... ... ... ...
2023-04-26 09:40:00+02:00 40.8 4329.96 3.242
2023-04-26 09:50:00+02:00 38.5 4332.83 3.242
2023-04-26 10:00:00+02:00 38.8 4326.07 3.242
2023-04-26 10:10:00+02:00 38.2 4334.58 3.242
2023-04-26 10:30:00+02:00 45.8 4348.25 3.242

67 rows × 3 columns

Delivery Layer¶

In [68]:
ATuniverseL = [
        {
          "instrumentType": "Option",
          "instrumentDefinition": {
            "buySell": "Buy",
            "underlyingType": "Eti",
            "instrumentCode": instrument,
            "strike": str(strikePrice),
          },
          "pricingParameters": {
            "marketValueInDealCcy": str(ATdfLocalTimeZone['TRDPRC_1_OptPr'][i]),
            "riskFreeRatePercent": str(ATdfLocalTimeZone['EurRfRate'][i]),
            "underlyingPrice": str(ATdfLocalTimeZone['TRDPRC_1_UnderlyingPr'][i]),
            "pricingModelType": "BlackScholes",
            "dividendType": "ImpliedYield",
            "volatilityType": "Implied",
            "underlyingTimeStamp": "Default",
            "reportCcy": "EUR"
          }
        }
      for i in range(len(ATdfLocalTimeZone.index))]

Content Layer¶

In [69]:
ATCUniverseL = [  # C here is for the fact that we're using the content layer
    option.Definition(
        underlying_type=option.UnderlyingType.ETI,
        buy_sell='Buy',
        instrument_code=instrument,
        strike=float(strikePrice),
        pricing_parameters=option.PricingParameters(
            market_value_in_deal_ccy=float(ATdfLocalTimeZone['TRDPRC_1_OptPr'][i]),
            risk_free_rate_percent=float(ATdfLocalTimeZone['EurRfRate'][i]),
            underlying_price=float(ATdfLocalTimeZone['TRDPRC_1_UnderlyingPr'][i]),
            pricing_model_type='BlackScholes',
            volatility_type='Implied',
            underlying_time_stamp='Default',
            report_ccy='EUR'))
    for i in range(len(ATdfLocalTimeZone.index))]
In [70]:
batchOf = 100
for i, j in enumerate(Chunks(ATCUniverseL, batchOf)):
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(ATCUniverseL, batchOf)])} started")
    # Example request with Body Parameter - Symbology Lookup
    response5 = rdf.Definitions(
        universe=j,
        fields=requestFields)
    response5 = response5.get_data()
    if i == 0:
        response5df = response5.data.df
    else:
        response5df = response5df.append(response5.data.df, ignore_index=True)
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(ATCUniverseL, batchOf)])} ended")
Batch of 67 requests no. 1/1 started
Batch of 67 requests no. 1/1 ended
In [71]:
response5df.head(2)
Out[71]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType Volatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
0 61.5 3.205 4386.34 BlackScholes ImpliedYield Default EUR Calculated 14.289085 0.498699 0.002509 1.339649 -0.526622 4.34619
1 65.8 3.205 4396.66 BlackScholes ImpliedYield Default EUR Calculated 14.06356 0.524785 0.002536 1.412453 -0.459454 4.344004
In [72]:
ATIPADf = response5df.copy()  # IPA here stands for the service we used to get all the calculated valuse, Instrument Pricint Analitycs.
ATIPADf.index = ATdfLocalTimeZone.index
ATIPADf.columns.name = ATdfLocalTimeZone.columns.name
ATIPADf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)
ATIPADf.head(2)
Out[72]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType ImpliedVolatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
2023-04-19 15:00:00+02:00 61.5 3.205 4386.34 BlackScholes ImpliedYield Default EUR Calculated 14.289085 0.498699 0.002509 1.339649 -0.526622 4.34619
2023-04-19 17:10:00+02:00 65.8 3.205 4396.66 BlackScholes ImpliedYield Default EUR Calculated 14.06356 0.524785 0.002536 1.412453 -0.459454 4.344004

Graphs¶

Overlay¶

From now on we will not show AT dataframe equivalents because it is... equivalent!

In [73]:
display(searchDf2.iloc[
    (searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]])

IPADfGraph = IPADf[['ImpliedVolatility', 'MarketValueInDealCcy',
                    'RiskFreeRatePercent', 'UnderlyingPrice', 'DeltaPercent',
                    'GammaPercent', 'RhoPercent', 'ThetaPercent', 'VegaPercent']]

fig = px.line(IPADfGraph)  # This is just to see the implied vol graph when that field is available
# fig.layout = dict(xaxis=dict(type="category"))

# Format Graph: https://plotly.com/python/tick-formatting/
fig.update_layout(
    title=instrument,
    template='plotly_dark')

# Make it so that only one line is shown by default: # https://stackoverflow.com/questions/73384807/plotly-express-plot-subset-of-dataframe-columns-by-default-and-the-rest-as-opt
fig.for_each_trace(
    lambda t: t.update(
        visible=True if t.name in IPADfGraph.columns[:1] else "legendonly"))

# fig.update_xaxes(autorange=True)
# fig.update_layout(yaxis=IPADf.index[0::10])

fig.show()
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
6 Eurex Monthly EURO STOXX 50 Index Option 4350 ... STXE43500E3.EX 4350 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:19:00 2023-05-23

Sack of 3 Graphs¶

This representation will allow us to see several graphs at different scales stacked above one another. This way, we can see if the change in Implied Volatility is caused by a movement in the underlying or the Option price itself:

In [74]:
fig = subplots.make_subplots(rows=3, cols=1)

fig.add_trace(go.Scatter(x=IPADf.index, y=IPADf.ImpliedVolatility, name='Op Imp Volatility'), row=1, col=1)
fig.add_trace(go.Scatter(x=IPADf.index, y=IPADf.MarketValueInDealCcy, name='Op Mk Pr'), row=2, col=1)
fig.add_trace(go.Scatter(x=IPADf.index, y=IPADf.UnderlyingPrice, name=underlying + ' Undrlyg Pr'), row=3, col=1)


fig.update(layout_xaxis_rangeslider_visible=False)
fig.update_layout(title=IPADf.columns.name)
fig.update_layout(
    template='plotly_dark',
    autosize=False,
    width=1300,
    height=500)
fig.show()
searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]]
Out[74]:
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
6 Eurex Monthly EURO STOXX 50 Index Option 4350 ... STXE43500E3.EX 4350 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:19:00 2023-05-23

Simple Graph¶

Certain companies are slow to update libraries, dependencies or Python versions. They/You may thus not have access to plotly (the graph library we used above). Matplotlib is rather light and should work, even on machines with old setups:

In [75]:
display(searchDf2.iloc[(searchDf2.StrikePrice-currentUnderlyingPrc).abs().argsort()[:1]])
ATIPADfSimpleGraph = pd.DataFrame(
    data=ATIPADf.ImpliedVolatility.values, index=ATIPADf.ImpliedVolatility.index)

fig, ax = plt.subplots(ncols=1)

ax.plot(ATIPADfSimpleGraph, '.-')
# ax.xaxis.set_major_formatter(ticker.FuncFormatter(format_date))
ax.set_title(f"{searchDf2.iloc[(searchDf2.StrikePrice-currentUnderlyingPrc).abs().argsort()[:1]].RIC.values[0]} Implied Volatility At Trade Only")
fig.autofmt_xdate()

plt.show()
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
6 Eurex Monthly EURO STOXX 50 Index Option 4350 ... STXE43500E3.EX 4350 EUX 2023-05-19 [.STOXX50E] 2023-03-09 04:19:00 2023-05-23

Note here that we are looking only 'At Trade', i.e.: times when the option traded, not the underlying. There are therefore fewer datapoints.

EUREX, or SPX Call or Put Options¶

Let's put it all together into a single function. This ImpVolatilityCalcIPA function will allow anyone to:

(I) find the option (i) with the index of your choice (SPX or EUREX) as underlying, (ii) closest to strike price right now (i.e.: At The Money) and (iii) with the next, closest expiry date past x days after today,

(II) calculate the Implied Volatility for that option either (i) only at times when the option itself is traded or (ii) at any time the option or the underlying is being traded.

In [76]:
def ImpVolatilityCalcIPA(x=15,
                         instrument=None,
                         indexUnderlying=".STOXX50E",
                         callOrPut='Put',
                         dateBack=3,
                         expiryYearOfInterest=datetime.now().year,
                         riskFreeRate=None, riskFreeRateField=None,
                         timeZoneInGraph=datetime.now().astimezone(),
                         maxColwidth=200,
                         graphStyle='overlay',  # 'overlay', '3 graphs', 'simple'
                         simpleGraphLineStyle='.-',  # 'o-'
                         simpleGraphSize=(15, 5),
                         graphTemplate='plotly_dark',
                         debug=False,
                         returnDfGraph=False,
                         AtOptionTradeOnly=False):


    if indexUnderlying == ".STOXX50E":
        exchangeC, exchangeRIC, mcalGetCalendar = 'EUX', 'STX', 'EUREX'
    elif indexUnderlying == '.SPX':
        exchangeC, exchangeRIC, mcalGetCalendar = 'OPQ', 'SPX', 'CBOE_Futures'  # 'CBOE_Index_Options'  # should be 'CBOE_Index_Options'... CBOT_Equity


    def Get_exp_dates(year=expiryYearOfInterest,
                      days=True,
                      mcal_get_calendar=mcalGetCalendar):
        '''
        Get_exp_dates Version 3.0:

        This function gets expiration dates for a year for NDX options, which are the 3rd Fridays of each month.

        Changes
        ----------------------------------------------
        Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
            (i) for the function's holiday argument to be changed, and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and
            (ii) for the function to output full date objects as opposed to just days of the month if agument days=True.

        Changed from Version 2.0 to 3.0: Jonathan Legrand changed this function to reflec tthe fact that it can be used for indexes other than EUREX.

        Dependencies
        ----------------------------------------------
        Python library 'pandas_market_calendars' version 3.2

        Parameters
        -----------------------------------------------
        Input:
            year(int): year for which expiration days are requested

            mcal_get_calendar(str):
                String of the calendar for which holidays have to be taken into account.
                More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
                Default: mcal_get_calendar='EUREX'

            days(bool): If True, only days of the month is outputed, else it's dataeime objects
                Default: days=True

        Output:
            dates(dict): dictionary of expiration days for each month of a specified year in datetime.date format.
        '''

        # get CBOE market holidays
        Cal = mcal.get_calendar(mcal_get_calendar)
        holidays = Cal.holidays().holidays

        # set calendar starting from Saturday
        c = calendar.Calendar(firstweekday=calendar.SATURDAY)

        # get the 3rd Friday of each month
        exp_dates = {}
        for i in range(1, 13):
            monthcal = c.monthdatescalendar(year, i)
            date = monthcal[2][-1]
            # check if found date is an holiday and get the previous date if it is
            if date in holidays:
                date = date + timedelta(-1)
            # append the date to the dictionary
            if year in exp_dates:
                ### Changed from original code from here on by Jonathan Legrand on 2022-10-11
                if days: exp_dates[year].append(date.day)
                else: exp_dates[year].append(date)
            else:
                if days: exp_dates[year] = [date.day]
                else: exp_dates[year] = [date]
        return exp_dates

    timeOfCalcDatetime = datetime.now()  # For now, we will focuss on the use-case where we are calculating values for today; later we will allow for it historically for any day going back a few business days.
    timeOfCalcStr = datetime.now().strftime('%Y-%m-%d')
    fullDatesAtTimeOfCalc = Get_exp_dates(timeOfCalcDatetime.year, days=False)  # `timeOfCalcDatetime.year` here is 2023
    fullDatesAtTimeOfCalcDatetime = [
        datetime(i.year, i.month, i.day)
        for i in fullDatesAtTimeOfCalc[list(fullDatesAtTimeOfCalc.keys())[0]]]
    expiryDateOfInt = [i for i in fullDatesAtTimeOfCalcDatetime
                       if i > timeOfCalcDatetime + relativedelta(days=x)][0]

    if debug: print(f"expiryDateOfInt: {expiryDateOfInt}")

    response = search.Definition(
        view = search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
        query=indexUnderlying,
        select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, " +
                "UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket" +
                "UnderlyingQuoteName, UnderlyingQuoteRIC, InsertDateTime, RetireDate",
        filter=f"RCSAssetCategoryLeaf eq 'Option' and RIC eq '{exchangeRIC}*' and DocumentTitle ne '*Weekly*' " +
        f"and CallPutOption eq '{callOrPut}' and ExchangeCode eq '{exchangeC}' and " +
        f"ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} " +
        f"and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}",  # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
        top=10000,
    ).get_data()
    searchDf = response.data.df

    if debug: display(searchDf)

    try:
        underlyingPrice =  rd.get_history(
            universe=[indexUnderlying],
            fields=["TRDPRC_1"],
            interval="tick").iloc[-1][0]
    except:
        print("Function failed at the search strage, returning the following dataframe: ")
        display(searchDf)

    if debug:
        print(f"Underlying {indexUnderlying}'s price recoprded here was {underlyingPrice}")
        display(searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:10]])

    if instrument is None:
        instrument = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]].RIC.values[0]

    start = (timeOfCalcDatetime - pd.tseries.offsets.BDay(dateBack)).strftime('%Y-%m-%dT%H:%M:%S.%f')  # '2022-10-05T07:30:00.000'
    endDateTime = datetime.now()
    end = endDateTime.strftime('%Y-%m-%dT%H:%M:%S.%f')  #  e.g.: '2022-09-09T20:00:00.000'

    _optnMrktPrice = rd.get_history(
        universe=[instrument],
        fields=["TRDPRC_1"],
        interval="10min",
        start=start,  # Ought to always start at 4 am for OPRA exchanged Options, more info in the article below
        end=end)  # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below

    if debug:
        print(instrument)
        display(_optnMrktPrice)

    ## Data on certain options are stale and do not nessesarily show up on Workspace, in case that happens, we will pick the next ATM Option, which probably will have the same strike, but we will only do so once, any more and we could get too far from strike:
    if _optnMrktPrice.empty:
        if debug: print(f"No data could be found for {instrument}, so the next ATM Option was chosen")
        instrument = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[1:2]].RIC.values[0]
        if debug: print(f"{instrument}")
        _optnMrktPrice = rd.get_history(universe=[instrument],
                                        fields=["TRDPRC_1"], interval="10min",
                                        start=start, end=end)
        if debug: display(_optnMrktPrice)
    if _optnMrktPrice.empty:  # Let's try one more time, as is often nessesary
        if debug: print(f"No data could be found for {instrument}, so the next ATM Option was chosen")
        instrument = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[2:3]].RIC.values[0]
        if debug: print(f"{instrument}")
        _optnMrktPrice = rd.get_history(universe=[instrument],
                                        fields=["TRDPRC_1"], interval="10min",
                                        start=start, end=end)
        if debug: display(_optnMrktPrice)
    if _optnMrktPrice.empty:
        print(f"No data could be found for {instrument}, please check it on Refinitiv Workspace")

    optnMrktPrice = _optnMrktPrice.resample('10Min').mean() # get a datapoint every 10 min
    optnMrktPrice = optnMrktPrice[optnMrktPrice.index.strftime('%Y-%m-%d').isin([i for i in _optnMrktPrice.index.strftime('%Y-%m-%d').unique()])]  # Only keep trading days
    optnMrktPrice = optnMrktPrice.loc[(optnMrktPrice.index.strftime('%H:%M:%S') >= '07:30:00') & (optnMrktPrice.index.strftime('%H:%M:%S') <= '22:00:00')]  # Only keep trading hours
    optnMrktPrice.fillna(method='ffill', inplace=True)  # Forward Fill to populate NaN values

    # Note also that one may want to only look at 'At Option Trade' datapoints,
    # i.e.: Implied Volatility when a trade is made for the Option, but not when
    # none is made. For this, we will use the 'At Trade' (`AT`) dataframes:
    if AtOptionTradeOnly: AToptnMrktPrice = _optnMrktPrice

    underlying = searchDf.iloc[(searchDf.StrikePrice).abs().argsort()[:1]].UnderlyingQuoteRIC.values[0][0]

    _underlyingMrktPrice = rd.get_history(
        universe=[underlying],
        fields=["TRDPRC_1"],
        interval="10min",
        start=start,
        end=end)
    # Let's put it al in one data-frame, `df`. Some datasets will have data
    # going from the time we sert for `start` all the way to `end`. Some won't
    # because no trade happened in the past few minutes/hours. We ought to base
    # ourselves on the dataset with values getting closer to `end` and `ffill`
    # for the other column. As a result, the following `if` loop is needed:
    if optnMrktPrice.index[-1] >= _underlyingMrktPrice.index[-1]:
        df = optnMrktPrice.copy()
        df[f"underlying {underlying} TRDPRC_1"] = _underlyingMrktPrice
    else:
        df = _underlyingMrktPrice.copy()
        df.rename(
            columns={"TRDPRC_1": f"underlying {underlying} TRDPRC_1"},
            inplace=True)
        df['TRDPRC_1'] = optnMrktPrice
        df.columns.name = optnMrktPrice.columns.name
    df.fillna(method='ffill', inplace=True)  # Forward Fill to populate NaN values
    df = df.dropna()

    if AtOptionTradeOnly:
        ATunderlyingMrktPrice = AToptnMrktPrice.join(
            _underlyingMrktPrice, lsuffix='_OptPr', how='inner',
            rsuffix=f" Underlying {underlying} TRDPRC_1")
        ATdf = ATunderlyingMrktPrice

    strikePrice = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]].StrikePrice.values[0]

    if riskFreeRate is None and indexUnderlying == ".SPX":
        _riskFreeRate = 'USDCFCFCTSA3M='
        _riskFreeRateField = 'TR.FIXINGVALUE'
    elif riskFreeRate is None and indexUnderlying == ".STOXX50E":
        _riskFreeRate = 'EURIBOR3MD='
        _riskFreeRateField = 'TR.FIXINGVALUE'
    else:
        _riskFreeRate, _riskFreeRateField = riskFreeRate, riskFreeRateField

    _RfRate = rd.get_history(
        universe=[_riskFreeRate],  # USD3MFSR=, USDSOFR=
        fields=[_riskFreeRateField],
        # Since we will use `dropna()` as a way to select the rows we are after later on in the code, we need to ask for more risk-free data than needed, just in case we don't have enough:
        start=(datetime.strptime(start, '%Y-%m-%dT%H:%M:%S.%f') - timedelta(days=1)).strftime('%Y-%m-%d'),
        end=(datetime.strptime(end, '%Y-%m-%dT%H:%M:%S.%f') + timedelta(days=1)).strftime('%Y-%m-%d'))
    RfRate = _RfRate.resample('10Min').mean().fillna(method='ffill')
    if AtOptionTradeOnly:
        pd.options.mode.chained_assignment = None  # default='warn'
        ATunderlyingMrktPrice['RfRate'] = [pd.NA for i in ATunderlyingMrktPrice.index]
        for i in RfRate.index:
            _i = str(i)[:10]
            for n, j in enumerate(ATunderlyingMrktPrice.index):
                if _i in str(j):
                    if len(RfRate.loc[i].values) == 2:
                        ATunderlyingMrktPrice['RfRate'].iloc[n] = RfRate.loc[i].values[0][0]
                    elif len(RfRate.loc[i].values) == 1:
                        ATunderlyingMrktPrice['RfRate'].iloc[n] = RfRate.loc[i].values[0]
        ATdf = ATunderlyingMrktPrice.copy()
        ATdf = ATdf.fillna(method='ffill')  # This is in case there were no Risk Free datapoints released after a certain time, but trades on the option still went through.
    else:
        df['RfRate'] = RfRate
        df = df.fillna(method='ffill')

    if timeZoneInGraph != 'GMT' and AtOptionTradeOnly:
        ATdf.index = [
            ATdf.index[i].replace(
                tzinfo=pytz.timezone(
                    'GMT')).astimezone(
                tz=datetime.now().astimezone().tzinfo)
            for i in range(len(ATdf))]
    elif timeZoneInGraph != 'GMT':
        df.index = [
            df.index[i].replace(
                tzinfo=pytz.timezone(
                    'GMT')).astimezone(
                tz=timeZoneInGraph.tzinfo)
            for i in range(len(df))]

    if AtOptionTradeOnly:
        if debug:
            print("ATdf")
            display(ATdf)
        universeL = [
            option.Definition(
                underlying_type=option.UnderlyingType.ETI,
                buy_sell='Buy',
                instrument_code=instrument,
                strike=float(strikePrice),
                pricing_parameters=option.PricingParameters(
                    market_value_in_deal_ccy=float(ATdf.TRDPRC_1_OptPr[i]),
                    risk_free_rate_percent=float(ATdf.RfRate[i]),
                    underlying_price=float(ATdf[
                        f"TRDPRC_1 Underlying {underlying} TRDPRC_1"][i]),
                    pricing_model_type='BlackScholes',
                    volatility_type='Implied',
                    underlying_time_stamp='Default',
                    report_ccy='EUR'))
            for i in range(len(ATdf.index))]
    else:
        if debug:
            print("df")
            display(df)
        universeL = [  # C here is for the fact that we're using the content layer
            option.Definition(
                underlying_type=option.UnderlyingType.ETI,
                buy_sell='Buy',
                instrument_code=instrument,
                strike=float(strikePrice),
                pricing_parameters=option.PricingParameters(
                    market_value_in_deal_ccy=float(df.TRDPRC_1[i]),
                    risk_free_rate_percent=float(df.RfRate[i]),
                    underlying_price=float(df[
                        f"underlying {underlying} TRDPRC_1"][i]),
                    pricing_model_type='BlackScholes',
                    volatility_type='Implied',
                    underlying_time_stamp='Default',
                    report_ccy='EUR'))
            for i in range(len(df.index))]

    def Chunks(lst, n):
        """Yield successive n-sized chunks from lst."""
        for i in range(0, len(lst), n):
            yield lst[i:i + n]

    requestFields = [
        "MarketValueInDealCcy", "RiskFreeRatePercent",
        "UnderlyingPrice", "PricingModelType",
        "DividendType", "UnderlyingTimeStamp",
        "ReportCcy", "VolatilityType",
        "Volatility", "DeltaPercent", "GammaPercent",
        "RhoPercent", "ThetaPercent", "VegaPercent"]

    batchOf = 100
    for i, j in enumerate(Chunks(universeL, batchOf)):
        if debug: print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(universeL, batchOf)])} started")
        # Example request with Body Parameter - Symbology Lookup
        request_definition = rdf.Definitions(universe=j, fields=requestFields)
        response = request_definition.get_data()
        if i == 0:
            IPADf = response.data.df
        else:
            IPADf = IPADf.append(response.data.df, ignore_index=True)
        if debug: print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(universeL, batchOf)])} ended")

    if AtOptionTradeOnly:
        IPADf.index = ATdf.index
        IPADf.columns.name = ATdf.columns.name
    else:
        IPADf.index = df.index
        IPADf.columns.name = df.columns.name
    IPADf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)

    # We are going to want to show details about data retreived in a dataframe in the output of this function. The one line below allows us to maximise the width (column) length of cells to see all that is written within them.
    pd.options.display.max_colwidth = maxColwidth

    if graphStyle == 'simple':
        display(searchDf.iloc[(searchDf.StrikePrice-underlyingPrice).abs().argsort()[:1]])
        fig, axes = plt.subplots(ncols=1, figsize=simpleGraphSize)
        axes.plot(
            pd.DataFrame(  # Unfortunutally, Matplotlib, which is the library used here for simple graphs, require our dataframe to be in a specific format that necessitate the use of `pd.DataFrame`
                data=IPADf[['ImpliedVolatility']].ImpliedVolatility.values,
                index=IPADf[['ImpliedVolatility']].ImpliedVolatility.index),
            simpleGraphLineStyle)
        if AtOptionTradeOnly: axes.set_title(f"{instrument} Implied Volatility At Trade Only")
        else: axes.set_title(f"{instrument} Implied Volatility")
        plt.show()

    else:

        display(searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]])

        IPADfGraph = IPADf[
            ['ImpliedVolatility', 'MarketValueInDealCcy',
             'RiskFreeRatePercent', 'UnderlyingPrice', 'DeltaPercent',
             'GammaPercent', 'RhoPercent', 'ThetaPercent', 'VegaPercent']]

        if debug: display(IPADfGraph)

        try:  # This is needed in case there is not enough data to calculate values for all timestamps , see https://stackoverflow.com/questions/67244912/wide-format-csv-with-plotly-express
            fig = px.line(IPADfGraph)
        except:
            if returnDfGraph:
                return IPADfGraph
            else:
                IPADfGraph = IPADfGraph[
                    ["ImpliedVolatility", "MarketValueInDealCcy",
                     "RiskFreeRatePercent", "UnderlyingPrice"]]
                fig = px.line(IPADfGraph)

        if graphStyle == 'overlay':
            fig.update_layout(
                title=instrument,
                template=graphTemplate)
            fig.for_each_trace(
                lambda t: t.update(
                    visible=True if t.name in IPADfGraph.columns[:1] else "legendonly"))
            fig.show()

        elif graphStyle == '3 graphs':
            fig = plotly.subplots.make_subplots(rows=3, cols=1)

            fig.add_trace(go.Scatter(
                x=IPADf.index, y=IPADfGraph.ImpliedVolatility,
                name='Op Imp Volatility'), row=1, col=1)
            fig.add_trace(go.Scatter(
                x=IPADf.index, y=IPADfGraph.MarketValueInDealCcy,
                name='Op Mk Pr'), row=2, col=1)
            fig.add_trace(go.Scatter(
                x=IPADf.index, y=IPADfGraph.UnderlyingPrice,
                name=f"{underlying} Undrlyg Pr"), row=3, col=1)

            fig.update(layout_xaxis_rangeslider_visible=False)
            fig.update_layout(title=IPADfGraph.columns.name)
            fig.update_layout(
                title=instrument,
                template=graphTemplate,
                autosize=False,
                width=1300,
                height=500)
            fig.show()

        else:

            print("Looks like the agrument `graphStyle` used is incorrect. Try `simple`, `overlay` or `3 graphs`")
In [77]:
ImpVolatilityCalcIPA(  # This will pick up 10 min data
    x=15,
    indexUnderlying=".SPX",  # ".SPX" or ".STOXX50E"
    callOrPut='Call',  # 'Put' or 'Call'
    dateBack=3,
    expiryYearOfInterest=datetime.now().year,
    riskFreeRate=None,
    riskFreeRateField=None,  # 'TR.FIXINGVALUE'
    timeZoneInGraph=datetime.now().astimezone(),
    maxColwidth=200,
    graphStyle='overlay',  # 'overlay', '3 graphs', 'simple'
    simpleGraphLineStyle='.-',  # 'o-'
    simpleGraphSize=(15, 5),
    graphTemplate='plotly_dark',
    debug=False,
    returnDfGraph=True,
    AtOptionTradeOnly=True)
DocumentTitle RIC StrikePrice ExchangeCode ExpiryDate UnderlyingQuoteRIC InsertDateTime RetireDate
51 OPRA S&P 500 Index Option 4070 Call May 2023 , Stock Index Cash Option, Call 4070 USD 19-May-2023, OPRA SPXWe192340700.U 4070 OPQ 2023-05-19 [.SPX] 2023-03-09 03:38:24 2023-05-23

If you're interested in running this function in a loop that uppdates every 5 seconds, the below cell is for you. I am not running it here as it is an infinite loop, meaning taht it will not stop running, and won't allow subsequent cells to run.

In [78]:
# while True:
#     # Code executed here
#     clear_output(wait=True)
#     ImpVolatilityCalcIPA(
#                 dateBack=3, indexUnderlying=".STOXX50E", callOrPut='Call',
#                 graphStyle='simple', AtOptionTradeOnly=True)
#             time.sleep(5)

Finding Expired Options¶

The code in the cell below was written expertly by Haykaz Aramyan in the article 'Functions to find Option RICs traded on different exchanges'. I wanted to introduce it towards the end of this (current) article as it uses complex Python notions such as Classes. We look into reconstructing expiered option RICs which have different nomenclatures to live ones:

Below, we put ourselves in the shoes of an analyst backtesting a strategy involving past historical Implied Volatilities. E.g.: if the average 3-business-day historical Implied Volatility of an Option contract is too high, (s)he would not consider it in his(/her) portfolio.

Somwthing to keep in mind is that no intraday price data is available for options that expired 3 months (or more) prior, therefore, when intraday data is not available, daily data will be used.

STOXX50E Usecase¶

Let's focuss on STOXX50E.

We are applying simillar logic to what was seen before, above. As a result, we'll use the same object names and simply add 2, from indexUnderlying2 onwards:

In [79]:
timeOfCalc2, indexUnderlying2 = "2022-04-01", ".STOXX50E"
timeOfCalcDatetime2 = datetime.strptime(timeOfCalc2, '%Y-%m-%d')
currentUnderlyingPrc2 = rd.get_history(
    universe=[indexUnderlying2],
    start=timeOfCalc2,  # , end: "OptDateTime"=None
    fields=["TRDPRC_1"],
    interval="tick").iloc[-1][0]
currentUnderlyingPrc2
Out[79]:
4340.81
In [80]:
if indexUnderlying2 == ".STOXX50E":
    exchangeC2, exchangeRIC2, mcalGetCalendar2 = "EUX", "STX", "EUREX"
elif indexUnderlying2 == ".SPX":
    exchangeC2, exchangeRIC2, mcalGetCalendar2 = "OPQ", "SPX", "CBOE_Futures"
exchangeC2, exchangeRIC2, mcalGetCalendar2
Out[80]:
('EUX', 'STX', 'EUREX')

Now we can ge the expiry dates for our new senario, based on a time of calculation on "2022-04-01":

In [81]:
fullDatesAtTimeOfCalc2 = Get_exp_dates(
    year=2022, days=False,
    mcal_get_calendar=mcalGetCalendar2)
fullDatesAtTimeOfCalc2
Out[81]:
{2022: [datetime.date(2022, 1, 21),
  datetime.date(2022, 2, 18),
  datetime.date(2022, 3, 18),
  datetime.date(2022, 4, 14),
  datetime.date(2022, 5, 20),
  datetime.date(2022, 6, 17),
  datetime.date(2022, 7, 15),
  datetime.date(2022, 8, 19),
  datetime.date(2022, 9, 16),
  datetime.date(2022, 10, 21),
  datetime.date(2022, 11, 18),
  datetime.date(2022, 12, 16)]}
In [82]:
fullDatesAtTimeOfCalcDatetime2 = [
    datetime(i.year, i.month, i.day)
    for i in fullDatesAtTimeOfCalc2[list(fullDatesAtTimeOfCalc2.keys())[0]]]
fullDatesAtTimeOfCalcDatetime2
Out[82]:
[datetime.datetime(2022, 1, 21, 0, 0),
 datetime.datetime(2022, 2, 18, 0, 0),
 datetime.datetime(2022, 3, 18, 0, 0),
 datetime.datetime(2022, 4, 14, 0, 0),
 datetime.datetime(2022, 5, 20, 0, 0),
 datetime.datetime(2022, 6, 17, 0, 0),
 datetime.datetime(2022, 7, 15, 0, 0),
 datetime.datetime(2022, 8, 19, 0, 0),
 datetime.datetime(2022, 9, 16, 0, 0),
 datetime.datetime(2022, 10, 21, 0, 0),
 datetime.datetime(2022, 11, 18, 0, 0),
 datetime.datetime(2022, 12, 16, 0, 0)]
In [83]:
expiryDateOfInt2 = [i for i in fullDatesAtTimeOfCalcDatetime2
                    if i > timeOfCalcDatetime2 + relativedelta(days=x)][0]
expiryDateOfInt2
Out[83]:
datetime.datetime(2022, 5, 20, 0, 0)

We'll need new functions Get_exp_month and Check_ric:

In [84]:
def Get_exp_month(exp_date, opt_type):

    # define option expiration identifiers
    ident = {
        '1':  {'exp': 'A', 'C': 'A', 'P': 'M'},
        '2':  {'exp': 'B', 'C': 'B', 'P': 'N'},
        '3':  {'exp': 'C', 'C': 'C', 'P': 'O'},
        '4':  {'exp': 'D', 'C': 'D', 'P': 'P'},
        '5':  {'exp': 'E', 'C': 'E', 'P': 'Q'},
        '6':  {'exp': 'F', 'C': 'F', 'P': 'R'},
        '7':  {'exp': 'G', 'C': 'G', 'P': 'S'},
        '8':  {'exp': 'H', 'C': 'H', 'P': 'T'},
        '9':  {'exp': 'I', 'C': 'I', 'P': 'U'},
        '10': {'exp': 'J', 'C': 'J', 'P': 'V'},
        '11': {'exp': 'K', 'C': 'K', 'P': 'W'},
        '12': {'exp': 'L', 'C': 'L', 'P': 'X'}}

    # get expiration month code for a month
    if opt_type.upper() == 'C':
        exp_month = ident[str(exp_date.month)]['C']
    elif opt_type.upper() == 'P':
        exp_month = ident[str(exp_date.month)]['P']

    return ident, exp_month
In [85]:
def Check_ric(ric, maturity, ident):
    exp_date = pd.Timestamp(maturity)

    # get start and end date for get_historical_price_summaries
    # query (take current date minus 90 days period)
    sdate = (datetime.now() - timedelta(90)).strftime('%Y-%m-%d')
    edate = datetime.now().strftime('%Y-%m-%d')

    # check if option is matured. If yes, add expiration syntax and recalculate
    # start and end date of the query (take expiration day minus 90 days period)
    if pd.Timestamp(maturity) < datetime.now():
        ric = ric + '^' + ident[str(exp_date.month)]['exp'] + str(exp_date.year)[-2:]
        sdate = (exp_date - timedelta(90)).strftime('%Y-%m-%d')
        edate = exp_date.strftime('%Y-%m-%d')

    # request option prices. Please note, there is no settle price for OPRA traded options
    fieldsRequest = ['BID', 'ASK', 'TRDPRC_1']
    if not ric.split('.')[1][0] == 'U':
        fieldsRequest.append('SETTLE')

    prices = rd.content.historical_pricing.summaries.Definition(
        ric, start=sdate, end=edate,
        interval=rd.content.historical_pricing.Intervals.DAILY,
        fields=fieldsRequest).get_data()

    return ric, prices

Now we can get EUREX RIC:

In [86]:
def Get_ric_eurex(asset, maturity, strike, opt_type):
    exp_date = pd.Timestamp(maturity)

    if asset[0] == '.':
        asset_name = asset[1:]
        if asset_name == 'FTSE':
            asset_name = 'OTUK'
        elif asset_name == 'SSMI':
            asset_name = 'OSMI'
        elif asset_name == 'GDAXI':
            asset_name = 'GDAX'
        elif asset_name == 'ATX':
            asset_name = 'FATXA'
        elif asset_name == 'STOXX50E':
            asset_name = 'STXE'
    else:
        asset_name = asset.split('.')[0]

    ident, exp_month = Get_exp_month(
        exp_date=exp_date, opt_type=opt_type)

    if type(strike) == float:
        int_part = int(strike)
        dec_part = str(str(strike).split('.')[1])[0]
    else:
        int_part = int(strike)
        dec_part = '0'

    if len(str(int(strike))) == 1:
        strike_ric = '0' + str(int_part) + dec_part
    else:
        strike_ric = str(int_part) + dec_part

    possible_rics = []
    generations = ['', 'a', 'b', 'c', 'd']
    for gen in generations:
        ric = asset_name + strike_ric + gen + exp_month + str(exp_date.year)[-1:] + '.EX'
        ric, prices = Check_ric(ric, maturity, ident)
        if prices is not None:
            return ric, prices
        else:
            possible_rics.append(ric)
    print(f'Here is a list of possible RICs {possible_rics}, however we could not find any prices for those!')
    return ric, prices

Note that this function, Get_ric_eurex, needs a round number as strike price:

In [87]:
int(round(currentUnderlyingPrc2, -2))
Out[87]:
4300
In [88]:
instrument2, instrument2Prices = Get_ric_eurex(
    asset='.STOXX50E', opt_type='P',
    maturity=expiryDateOfInt2.strftime('%Y-%m-%d'),
    strike=int(round(currentUnderlyingPrc2, -2)))
In [89]:
instrument2
Out[89]:
'STXE43000Q2.EX^E22'
In [90]:
instrument2Prices.data.df
Out[90]:
STXE43000Q2.EX^E22 BID ASK TRDPRC_1 SETTLE
Date
2022-02-21 422.2 434.3 <NA> 426.9
2022-02-22 420.1 433.1 <NA> 424.7
2022-02-23 426.6 443.0 <NA> 435.0
2022-02-24 552.2 573.1 <NA> 563.0
2022-02-25 428.4 444.1 <NA> 435.0
... ... ... ... ...
2022-05-16 611.5 643.0 628.8 627.0
2022-05-17 548.7 580.3 <NA> 565.0
2022-05-18 604.7 634.2 <NA> 618.8
2022-05-19 651.8 685.1 <NA> 669.7
2022-05-20 577.6 640.8 <NA> 600.2

63 rows × 4 columns

General Usecase¶

Above we looked at the specific usecase with EUREX, let's generalise it:

In [91]:
from typing import Tuple, Union, Dict, List, Any
In [92]:
class Option_RIC:
    """
    Option_RIC
    """

    def __init__(
        self,
        maturity: str,  # '2022-01-21'
        strike: int,
        opt_type: str,  # 'C' or 'P'
        asset: str = ".STOXX50E",
        debug: bool = False,
        topNuSearchResults: int = 100):

        # Most objects are simple to define at this stage, but soon you'll see that two of them are a little more finicky
        self.maturity = pd.Timestamp(maturity)
        self.strike = strike
        self.opt_type = opt_type
        self.debug = debug
        self.asset = asset

        response = search.Definition(
            query=asset,
            filter="SearchAllCategory eq 'Options' and Periodicity eq 'Monthly' ",
            select=' RIC, DocumentTitle, UnderlyingQuoteRIC,Periodicity, ExchangeCode',
            navigators="ExchangeCode",
            top=topNuSearchResults).get_data()
        result = response.data.raw["Navigators"]["ExchangeCode"]
        exchange_codes = []
        for i in range(len(result['Buckets'])):
            code = result['Buckets'][i]['Label']
            exchange_codes.append(code)

        self.exchange = exchange_codes


    def Check_ric(self, ric, maturity):
        """
        Support Function used within other functions in `Option_RIC` Class.
        """

        exp_date = pd.Timestamp(maturity)

        if pd.Timestamp(maturity) < datetime.now():
            sdate = (exp_date - timedelta(600)).strftime('%Y-%m-%d')
            edate = exp_date.strftime('%Y-%m-%d')
        else:
            sdate = (datetime.now() - timedelta(90)).strftime('%Y-%m-%d')
            edate = datetime.now().strftime('%Y-%m-%d')
        if self.debug:
            print(f"Check_ric's (ric, sdate, edate) = ({ric}, {sdate}, {edate})")

        # Now things are gettijng tricky.
        # Certain Expiered Options do not have 'TRDPRC_1' data historically. Some don't have 'SETTLE'. Some have both...
        # The below should capture 'SETTLE' when it is available, but 'TRDPRC_1' might still be present in these instances.
        # So we will need to build a logic to focus on the series with the most datapoints.
        if ric.split('.')[1][0] == 'U':
            prices = rd.content.historical_pricing.summaries.Definition(
                ric,  start=sdate, end=edate,
                interval=rd.content.historical_pricing.Intervals.DAILY,
                fields=['TRDPRC_1', 'BID', 'ASK']).get_data()  # Later in the code, we will pick the column that has the fewest `<NA>`s. It oculd be that 'BID' and 'TRDPRC_1' have just as many `<NA>`s; the code will pick the 1st column with the same number of `<NA>`s in this case, so we have to ask for fields in the order of importance. Here we're most interested in 'TRDPRC_1', the rest later.
        else:
            prices = rd.content.historical_pricing.summaries.Definition(
                ric,  start=sdate, end=edate,
                interval=rd.content.historical_pricing.Intervals.DAILY,
                fields=['SETTLE', 'TRDPRC_1', 'BID', 'ASK']).get_data()
        if self.debug:
            print(f"prices.data.df.isna().sum(axis=0).idxmin() = {prices.data.df.isna().sum(axis=0).idxmin()}")
            print(f"prices.data.df[prices.data.df.isna().sum(axis=0).idxmin()] = {prices.data.df[prices.data.df.isna().sum(axis=0).idxmin()]}")
        fullest_prices = pd.DataFrame(  # Deppending on which Option (and underlying) user picks, the apropriate price field changes - annoyingly. `fullest_prices` attempts to pick only the collumn - and thus the field - with fewest `NaN`s, which ought to be the correct field/column.
            columns=[prices.data.df.isna().sum(axis=0).idxmin()],  # This is the name of the column with fewest NAs
            data=prices.data.df[prices.data.df.isna().sum(axis=0).idxmin()])  # This is the column with fewest NAs
        return ric, prices, fullest_prices

    def Get_asset_and_exchange(self):
        """
        Support Function used within other functions in `Option_RIC` Class.
        """

        asset_in_ric: Dict[str, Dict[Union[str, Dict[str, str]]]] = {
            'SSMI':     {'EUX': 'OSMI'},
            'GDAXI':    {'EUX': 'GDAX'},
            'ATX':      {'EUX': 'FATXA'},
            'STOXX50E': {'EUX': 'STXE'},
            'FTSE':     {'IEU': 'LFE', 'EUX': 'OTUK'},
            'N225':     {'OSA': 'JNI'},
            'TOPX':     {'OSA': 'JTI'}}

        asset_exchange: Dict[str, str] = {}
        if self.asset[0] != '.':
            asset: str = self.asset.split('.')[0]
        else:
            asset: str = self.asset[1:]
        for exch in self.exchange:
            if asset in asset_in_ric:
                asset_exchange[exch] = asset_in_ric[asset][exch]
            else:
                asset_exchange[exch] = asset
        return asset_exchange

    def Get_strike(self, exch):

        if exch == 'OPQ':
            if type(self.strike) == float:
                int_part = int(self.strike)
                dec_part = str(str(self.strike).split('.')[1])
            else:
                int_part = int(self.strike)
                dec_part = '00'
            if int(self.strike) < 10:
                strike_ric = '00' + str(int_part) + dec_part
            elif int_part >= 10 and int_part < 100:
                strike_ric = '0' + str(int_part) + dec_part
            elif int_part >= 100 and int_part < 1000:
                strike_ric = str(int_part) + dec_part
            elif int_part >= 1000 and int_part < 10000:
                strike_ric = str(int_part) + '0'
            elif int_part >= 10000 and int_part < 20000:
                strike_ric = 'A' + str(int_part)[-4:]
            elif int_part >= 20000 and int_part < 30000:
                strike_ric = 'B' + str(int_part)[-4:]
            elif int_part >= 30000 and int_part < 40000:
                strike_ric = 'C' + str(int_part)[-4:]
            elif int_part >= 40000 and int_part < 50000:
                strike_ric = 'D' + str(int_part)[-4:]

        elif exch == 'HKG' or exch == 'HFE':
            if self.asset[0] == '.':
                strike_ric = str(int(self.strike))
            else:
                strike_ric = str(int(self.strike * 100))

        elif exch == 'OSA':
            strike_ric = str(self.strike)[:3]

        elif exch == 'EUX' or exch == 'IEU':
            if type(self.strike) == float and len(str(int(self.strike))) == 1:
                int_part = int(self.strike)
                dec_part = str(str(self.strike).split('.')[1])[0]
                strike_ric = '0' + str(int_part) + dec_part
            elif (len(str(int(self.strike))) > 1 and exch == 'EUX'):
                strike_ric = str(int(self.strike)) + '0'
            elif (len(str(int(self.strike))) == 2 and exch == 'IEU'):
                strike_ric = '0' + str(int(self.strike))
            elif len(str(int(self.strike))) > 2 and exch == 'IEU':
                strike_ric = str(int(self.strike))

        return strike_ric

    def Get_exp_month(self, exchange):
        """
        Support Function used within other functions in `Option_RIC` Class.
        """
        ident_opra = {
            '1': {'exp': 'A', 'C_bigStrike': 'a', 'C_smallStrike': 'A',
                  'P_bigStrike': 'm', 'P_smallStrike': 'M'},
            '2': {'exp': 'B', 'C_bigStrike': 'b', 'C_smallStrike': 'B',
                  'P_bigStrike': 'n', 'P_smallStrike': 'N'},
            '3': {'exp': 'C', 'C_bigStrike': 'c', 'C_smallStrike': 'C',
                  'P_bigStrike': 'o', 'P_smallStrike': 'O'},
            '4': {'exp': 'D', 'C_bigStrike': 'd', 'C_smallStrike': 'D',
                  'P_bigStrike': 'p', 'P_smallStrike': 'P'},
            '5': {'exp': 'E', 'C_bigStrike': 'e', 'C_smallStrike': 'E',
                  'P_bigStrike': 'q', 'P_smallStrike': 'Q'},
            '6': {'exp': 'F', 'C_bigStrike': 'f', 'C_smallStrike': 'F',
                  'P_bigStrike': 'r', 'P_smallStrike': 'R'},
            '7': {'exp': 'G', 'C_bigStrike': 'g', 'C_smallStrike': 'G',
                  'P_bigStrike': 's', 'P_smallStrike': 'S'},
            '8': {'exp': 'H', 'C_bigStrike': 'h', 'C_smallStrike': 'H',
                  'P_bigStrike': 't', 'P_smallStrike': 'T'},
            '9': {'exp': 'I', 'C_bigStrike': 'i', 'C_smallStrike': 'I',
                  'P_bigStrike': 'u', 'P_smallStrike': 'U'},
            '10': {'exp': 'J', 'C_bigStrike': 'j', 'C_smallStrike': 'J',
                   'P_bigStrike': 'v', 'P_smallStrike': 'V'},
            '11': {'exp': 'K', 'C_bigStrike': 'k', 'C_smallStrike': 'K',
                   'P_bigStrike': 'w', 'P_smallStrike': 'W'},
            '12': {'exp': 'L', 'C_bigStrike': 'l', 'C_smallStrike': 'L',
                   'P_bigStrike': 'x', 'P_smallStrike': 'X'}}

        ident_all = {
            '1':  {'exp': 'A', 'C': 'A', 'P': 'M'},
            '2':  {'exp': 'B', 'C': 'B', 'P': 'N'},
            '3':  {'exp': 'C', 'C': 'C', 'P': 'O'},
            '4':  {'exp': 'D', 'C': 'D', 'P': 'P'},
            '5':  {'exp': 'E', 'C': 'E', 'P': 'Q'},
            '6':  {'exp': 'F', 'C': 'F', 'P': 'R'},
            '7':  {'exp': 'G', 'C': 'G', 'P': 'S'},
            '8':  {'exp': 'H', 'C': 'H', 'P': 'T'},
            '9':  {'exp': 'I', 'C': 'I', 'P': 'U'},
            '10': {'exp': 'J', 'C': 'J', 'P': 'V'},
            '11': {'exp': 'K', 'C': 'K', 'P': 'W'},
            '12': {'exp': 'L', 'C': 'L', 'P': 'X'}}

        if exchange == 'OPQ':
            if self.strike > 999.999:
                exp_month_code = ident_opra[str(
                    self.maturity.month)][self.opt_type + '_bigStrike']
            else:
                exp_month_code = ident_opra[str(
                    self.maturity.month)][self.opt_type + '_smallStrike']
        else:
            exp_month_code = ident_all[str(self.maturity.month)][self.opt_type]

        if self.maturity < datetime.now():
            expired = '^' + \
                ident_all[str(self.maturity.month)]['exp'] + \
                str(self.maturity.year)[-2:]
        else:
            expired = ''

        return exp_month_code, expired

    def RIC_prices(self, ric, ricPrices):
        if self.debug:
            print(f"ricPrices's ric: {ric}")
            print(f"self.maturity: {self.maturity}")
        ric, prices, full_prices = self.Check_ric(ric, self.maturity)
        if prices is not None:
            valid_ric = {ric: prices, f'{ric} fullest prices': full_prices}
            ricPrices['valid_ric'].append(valid_ric)
        else:
            ricPrices['potential_rics'].append(ric)

        return ricPrices

    def Construct_RIC(self):
        asset_exchange = self.Get_asset_and_exchange()
        supported_exchanges = ['OPQ', 'IEU', 'EUX', 'HKG', 'HFE', 'OSA']
        ricPrices = {'valid_ric': [], 'potential_rics': []}
        for exchange, asset in asset_exchange.items():
            if exchange in supported_exchanges:
                strike_ric = self.Get_strike(exchange)
                exp_month_code, expired = self.Get_exp_month(exchange)

                if exchange == 'OPQ':
                    ric = asset + exp_month_code + \
                        str(self.maturity.day) + \
                        str(self.maturity.year)[-2:] + \
                        strike_ric + '.U' + expired
                    ricPrices = self.RIC_prices(ric, ricPrices)

                elif exchange == 'HKG' or exchange == 'HFE':
                    gen_len = ['0', '1', '2', '3']
                    if exchange == 'HFE':
                        gen_len = ['']
                    for i in gen_len:
                        exchs = {'HKG': {'exch_code': '.HK', 'gen': str(i)},
                                 'HFE': {'exch_code': '.HF', 'gen': ''}}
                        ric = asset + strike_ric + exchs[exchange]['gen'] + exp_month_code + str(
                            self.maturity.year)[-1:] + exchs[exchange]['exch_code'] + expired
                        ricPrices = self.RIC_prices(ric, ricPrices)

                elif exchange == 'OSA':
                    for jnet in ['', 'L', 'R']:
                        if self.asset[0] == '.':
                            ric = asset + jnet + strike_ric + exp_month_code + \
                                str(self.maturity.year)[-1:] + '.OS' + expired
                            ricPrices = self.RIC_prices(ric, ricPrices)
                        else:
                            for gen in ['Y', 'Z', 'A', 'B', 'C']:
                                ric = asset + jnet + gen + strike_ric + exp_month_code + \
                                    str(self.maturity.year)[-1:] + \
                                    '.OS' + expired
                                ricPrices = self.RIC_prices(ric, ricPrices)

                elif exchange == 'EUX' or exchange == 'IEU':
                    exchs = {'EUX': '.EX', 'IEU': '.L'}
                    for gen in ['', 'a', 'b', 'c', 'd']:
                        ric = asset + strike_ric + gen + exp_month_code + \
                            str(self.maturity.year)[-1:] + \
                            exchs[exchange] + expired
                        if self.debug: print(f"Construct_RIC's ric: {ric}")
                        try:
                            ricPrices = self.RIC_prices(ric, ricPrices)
                        except:
                            if self.debug:
                                print("Error for self.RIC_prices(ric, ricPrices)")
            else:
                print(f'The {exchange} exchange is not supported yet')
        return ricPrices

General Use Case Test with HSI¶

Let's try it all again with the senario where we calculate values as of '2023-02-01' for index 'Hang Seng Index':

In [93]:
timeOfCalc3 = "2023-02-01"
indexUnderlying3 = ".HSI"
In [94]:
timeOfCalcDatetime2 = datetime.strptime(timeOfCalc3, '%Y-%m-%d')
In [95]:
HSI_test1 = Option_RIC(
    maturity=timeOfCalc3,
    strike=20200,  # could be: `int(round(rd.get_history(universe=[indexUnderlying3], start=timeOfCalc3, fields=["TRDPRC_1"], interval="tick").iloc[-1][0], -2))` but this would get the price right now, which may not be appropriate for a 'past'/expired option.
    opt_type='P',
    asset=indexUnderlying3,
    debug=False)
HSI_test2 = HSI_test1.Construct_RIC()
In [96]:
list(HSI_test2['valid_ric'][0].keys())[0]
Out[96]:
'HSI20200N3.HF^B23'
In [97]:
HSI_test3 = HSI_test2['valid_ric'][0][list(HSI_test2['valid_ric'][0].keys())[0]].data.df.head()
HSI_test3
Out[97]:
HSI20200N3.HF^B23 SETTLE TRDPRC_1 BID ASK
Date
2022-11-08 3569 <NA> <NA> <NA>
2022-11-09 3713 <NA> <NA> <NA>
2022-11-10 4025 <NA> <NA> <NA>
2022-11-11 2939 <NA> <NA> <NA>
2022-11-14 2665 <NA> <NA> <NA>
In [98]:
print(list(HSI_test2['valid_ric'][0].keys())[1])
HSI_test4 = HSI_test2['valid_ric'][0][list(HSI_test2['valid_ric'][0].keys())[1]]
HSI_test4.head()
HSI20200N3.HF^B23 fullest prices
Out[98]:
SETTLE
Date
2022-11-08 3569
2022-11-09 3713
2022-11-10 4025
2022-11-11 2939
2022-11-14 2665

General Use Case Test with SPX¶

Now let's look at SPX:

In [99]:
timeOfCalc3 = '2022-02-10'
indexUnderlying3 = ".SPX"
timeOfCalcDatetime3 = datetime.strptime(timeOfCalc3, '%Y-%m-%d')
currentUnderlyingPrc3 = rd.get_history(
    universe=[indexUnderlying3],
    start=timeOfCalc3,  # , end: "OptDateTime"=None
    fields=["TRDPRC_1"],
    interval="tick").iloc[-1][0]
currentUnderlyingPrc3
Out[99]:
4071.63
In [100]:
SPX_test2 = Option_RIC(
    maturity='2022-01-21',
    strike=int(round(currentUnderlyingPrc3, -2)),
    opt_type='P',
    asset=indexUnderlying3,
    debug=False)
SPX_test2 = SPX_test2.Construct_RIC()
In [101]:
list(SPX_test2['valid_ric'][0].keys())[0]
Out[101]:
'SPXm212241000.U^A22'
In [102]:
SPX_test2['valid_ric'][0][list(SPX_test2['valid_ric'][0].keys())[0]].data.df
Out[102]:
SPXm212241000.U^A22 TRDPRC_1 BID ASK
Date
2020-12-21 <NA> 546.9 560.0
2020-12-22 <NA> 555.0 558.9
2020-12-23 <NA> 549.2 552.8
2020-12-24 <NA> 535.1 540.6
2020-12-28 <NA> 509.9 513.5
... ... ... ...
2022-01-13 0.84 0.7 0.85
2022-01-14 0.45 0.4 0.55
2022-01-18 0.5 0.45 0.55
2022-01-19 0.38 0.35 0.45
2022-01-20 0.15 0.15 0.2

273 rows × 3 columns

In [103]:
print(list(SPX_test2['valid_ric'][0].keys())[1])
SPX_test2['valid_ric'][0][list(SPX_test2['valid_ric'][0].keys())[1]]
SPXm212241000.U^A22 fullest prices
Out[103]:
BID
Date
2020-12-21 546.9
2020-12-22 555.0
2020-12-23 549.2
2020-12-24 535.1
2020-12-28 509.9
... ...
2022-01-13 0.7
2022-01-14 0.4
2022-01-18 0.45
2022-01-19 0.35
2022-01-20 0.15

273 rows × 1 columns

Implied Volatility and Greeks of Expired Options (Historical Daily series)¶

The most granular Historical Options' price data kept are daily time-series. This daily data is captured buy the above Option_RIC().Construct_RIC() function. Some Options' historical price data is most "wholesome" (in this case, "has the least amount of NaNs" - Not a Number) under the field name TRDPRC_1, some under SETTLE. While our preference - ceteris paribus (all else equal) - is TRDPRC_1, more "wholesome" data-sets are still preferable, so the "fullest prices" in Option_RIC().Construct_RIC() picks the series with fewest NaNs.

In [104]:
HSI_underlying_RIC = '.HSI'
In [105]:
list(HSI_test2['valid_ric'][0].keys())[0]
Out[105]:
'HSI20200N3.HF^B23'
In [106]:
HSICurr = rd.get_data(
    universe=HSI_underlying_RIC,
    fields=["CF_CURR"])
HSICurr
Out[106]:
Instrument CF_CURR
0 .HSI 344
In [107]:
# Now we will try and find the strike price for the option found.
# If we were to use the logic above, in the function `Option_RIC().Get_strike()`, we would find a list of all the possible prices for options on this underlying, which is too large of a group.
# We will use the name of the outputed option, which includes the strike:
import re  # native Python library that allows us to manipulate strings
hist_opt_found_strk_pr = re.findall(
    '(\d+|[A-Za-z]+)',  # This will split the string out of its numerical and non-numerical characters.
    list(HSI_test2['valid_ric'][0].keys())[0])[1][0:-1]  # `[1]` here skips through 'HSI' and to the numbers. `[0:-1]` here is there to ignore the last digit, which is not part of the strike price.
hist_opt_found_strk_pr
Out[107]:
'2020'
In [108]:
hk_rf = 100 - rd.get_history(
    universe=['HK3MT=RR'],  # HK10YGB=EODF, HKGOV3MZ=R, HK3MT=RR
    fields=['TR.MIDPRICE'],
    start=HSI_test4.index[0].strftime('%Y-%m-%d'),
    end=HSI_test4.index[-1].strftime('%Y-%m-%d'))  # .iloc[::-1]  # `.iloc[::-1]` is here so that the resulting data-frame is the same order as `HSI_test5` so we can merge them later
hk_rf
Out[108]:
HK3MT=RR Mid Price
Date
2022-11-08 0.7445
2022-11-09 0.7365
2022-11-10 0.731
2022-11-11 0.713
2022-11-14 0.71
... ...
2023-01-26 0.577
2023-01-27 0.532
2023-01-30 0.523
2023-01-31 0.5745
2023-02-01 0.5925

62 rows × 1 columns

In [109]:
HSI_test5 = pd.merge(
    HSI_test4, hk_rf,
    left_index=True, right_index=True)
HSI_test5 = HSI_test5.rename(
    columns={"SETTLE": "OptionPrice", "Mid Price": "RfRatePrct"})
HSI_test5.head()
Out[109]:
OptionPrice RfRatePrct
Date
2022-11-08 3569 0.7445
2022-11-09 3713 0.7365
2022-11-10 4025 0.731
2022-11-11 2939 0.713
2022-11-14 2665 0.71
In [110]:
hist_HSI_undrlying_pr = rd.get_history(
    universe=[HSI_underlying_RIC],
    fields=["TRDPRC_1"],
    # interval="1D",
    start=HSI_test4.index[0].strftime('%Y-%m-%d'),
    end=HSI_test4.index[-1].strftime('%Y-%m-%d'))  # .iloc[::-1]  # `.iloc[::-1]` is here so that the resulting data-frame is the same order as `HSI_test5` so we can merge them later
hist_HSI_undrlying_pr.head(2)
Out[110]:
.HSI TRDPRC_1
Date
2022-11-09 16358.52
2022-11-10 16081.04
In [111]:
HSI_test6 = pd.merge(HSI_test5, hist_HSI_undrlying_pr,
                     left_index=True, right_index=True)
HSI_test6 = HSI_test6.rename(
    columns={"TRDPRC_1": "UndrlyingPr"})
HSI_test6.columns.name = list(HSI_test2['valid_ric'][0].keys())[0]  # This is to name the data-frame. Technically it names the column set, but the're all just for one instrument, so it's the same difference.
HSI_test6.head(2)
Out[111]:
HSI20200N3.HF^B23 OptionPrice RfRatePrct UndrlyingPr
Date
2022-11-09 3713 0.7365 16358.52
2022-11-10 4025 0.731 16081.04
In [112]:
list(HSI_test2['valid_ric'][0].keys())[0]
Out[112]:
'HSI20200N3.HF^B23'
In [113]:
HSI_test_start = HSI_test2['valid_ric'][0][list(HSI_test2['valid_ric'][0].keys())[1]].index[0]
# HSI_test_end = HSI_test2['valid_ric'][0][list(HSI_test2['valid_ric'][0].keys())[1]].index[-1]
(HSI_test1.maturity - HSI_test_start).days/365  # Expecting this to `YearsToExpiry` which is in 'DaysToExpiry / 365'.
Out[113]:
0.2328767123287671
In [114]:
HSI_test2['valid_ric'][0][
    f"{list(HSI_test2['valid_ric'][0].keys())[0]} fullest prices"].head(2)
Out[114]:
SETTLE
Date
2022-11-08 3569
2022-11-09 3713
In [115]:
HSI_test2['valid_ric'][0][list(HSI_test2['valid_ric'][0].keys())[0]].data.df.head(2)
Out[115]:
HSI20200N3.HF^B23 SETTLE TRDPRC_1 BID ASK
Date
2022-11-08 3569 <NA> <NA> <NA>
2022-11-09 3713 <NA> <NA> <NA>
In [116]:
# rd.content.historical_pricing.summaries.Definition(
#     list(HSI_test2['valid_ric'][0].keys())[0],
#     start='2022-01-01',
#     end='2024-02-27',
#     interval=rd.content.historical_pricing.Intervals.DAILY,
#     fields=['TRDPRC_1', 'BID', 'ASK', 'EXPIR_DATE', 'TR.FOFirstTradingDate']).get_data().data.df
In [117]:
HSI_test2_exp_date = HSI_test1.maturity.strftime('%Y-%m-%d')
HSI_test2_exp_date
Out[117]:
'2023-02-01'

Content Layer¶

In [118]:
# df = [
#     option.Definition(
#         underlying_type=option.UnderlyingType.ETI,
#         buy_sell='Buy',
#         instrument_code=list(HSI_test2['valid_ric'][0].keys())[0],  # 'STXE42000D3.EX' #  'HSI19300N3.HF^B23',  # list(HSI_test2['valid_ric'][0].keys())[0],
#         strike=float(hist_opt_found_strk_pr),
#         pricing_parameters=option.PricingParameters(
#             market_value_in_deal_ccy=float(HSI_test6['OptionPrice'][i]),
#             risk_free_rate_percent=float(HSI_test6['RfRatePrct'][i]),
#             underlying_price=float(HSI_test6['UndrlyingPr'][i]),
#             pricing_model_type='BlackScholes',
#             volatility_type='Implied',
#             underlying_time_stamp='Default',
#             report_ccy='HKD'
#         ))
#     for i in range(len(HSI_test6.index))]
In [119]:
help(option.TimeStamp.SETTLE)
Help on TimeStamp in module refinitiv.data.content.ipa._enums._time_stamp object:

class TimeStamp(enum.Enum)
 |  TimeStamp(value, names=None, *, module=None, qualname=None, type=None, start=1)
 |  
 |  An enumeration.
 |  
 |  Method resolution order:
 |      TimeStamp
 |      enum.Enum
 |      builtins.object
 |  
 |  Data and other attributes defined here:
 |  
 |  CLOSE = <TimeStamp.CLOSE: 'Close'>
 |  
 |  CLOSE_LONDON5_PM = <TimeStamp.CLOSE_LONDON5_PM: 'CloseLondon5PM'>
 |  
 |  CLOSE_NEW_YORK5_PM = <TimeStamp.CLOSE_NEW_YORK5_PM: 'CloseNewYork5PM'>
 |  
 |  CLOSE_TOKYO5_PM = <TimeStamp.CLOSE_TOKYO5_PM: 'CloseTokyo5PM'>
 |  
 |  DEFAULT = <TimeStamp.DEFAULT: 'Default'>
 |  
 |  OPEN = <TimeStamp.OPEN: 'Open'>
 |  
 |  SETTLE = <TimeStamp.SETTLE: 'Settle'>
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from enum.Enum:
 |  
 |  name
 |      The name of the Enum member.
 |  
 |  value
 |      The value of the Enum member.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from enum.EnumMeta:
 |  
 |  __members__
 |      Returns a mapping of member name->value.
 |      
 |      This mapping lists all enum members, including aliases. Note that this
 |      is a read-only view of the internal mapping.

In [120]:
option.EtiUnderlyingDefinition(
            instrument_code=HSI_underlying_RIC).get_dict()
Out[120]:
{'instrumentCode': '.HSI'}
In [121]:
hist_daily_universe_l = [
    option.Definition(
        instrument_tag='Option',
        # `instrument_code` is ambiguous here because we ought to use our expired option RIC, but when we use it we automatically get ann NaNs back. Putting in a live option's RIC resolves the problem, but we're not investigating a live option... So you can simply not define it; which is what we're doing here.
        # instrument_code='STXE42000D3.EX',  # 'STXE42000D3.EX' #  'HSI19300N3.HF^B23',  # list(HSI_test2['valid_ric'][0].keys())[0],
        strike=float(hist_opt_found_strk_pr),
        buy_sell='Buy',
        call_put='Call',
        exercise_style='AMER',  # 'EURO'
        end_date=HSI_test2_exp_date,
        lot_size=float(1),
        deal_contract=int(1),
        time_zone_offset=int(0),
        underlying_type=option.UnderlyingType.ETI,
        underlying_definition=option.EtiUnderlyingDefinition(
            instrument_code=HSI_underlying_RIC),
        # tenor=str((HSI_test1.maturity - HSI_test_start).days/365),  # Expecting this to `YearsToExpiry` which is in 'DaysToExpiry / 365'.
        # notional_ccy='HKD',
        # notional_amount=,
        # asian_definition=,
        # barrier_definition=,
        # binary_definition=,
        # double_barrier_definition=,
        # double_binary_definition=,
        # dual_currency_definition=,
        # forward_start_definition=,
        # underlying_definition=,
        # delivery_date=HSI_test1.maturity.strftime('%Y-%m-%d'),
        # cbbc_definition=,
        # double_barriers_definition=,
        # end_date_time=,
        # offset=,
        # extended_params=,
        pricing_parameters=option.PricingParameters(
            valuation_date=HSI_test6.index[i].strftime('%Y-%m-%d'),
            report_ccy='HKD',
            pricing_model_type='BlackScholes',
            # dividend_type=None,  # APPARENTLY NOT APPLICABLE IN IPA/QA IN RD LIB. None, 'ForecastTable', 'HistoricalYield', 'ForecastYield', 'ImpliedYield', or 'ImpliedTable'.
            # dividend_yield_percent=,
            market_value_in_deal_ccy=float(HSI_test6['OptionPrice'][i]),
            # volatility_percent=,  # The degree of the underlying asset's price variations over a specified time period, used for the option pricing. the value is expressed in percentages. it is used to compute marketvalueindealccy.if marketvalueindealccy is defined, volatilitypercent is not taken into account. optional. by default, it is computed from marketvalueindealccy. if volsurface fails to return a volatility, it defaults to '20'.
            risk_free_rate_percent=float(HSI_test6['RfRatePrct'][i]),
            underlying_price=float(HSI_test6['UndrlyingPr'][i]),
            volatility_type='Implied', # 'option.OptionVolatilityType.IMPLIED,  # option.OptionVolatilityType.IMPLIED, 'Implied'
            option_price_side='Last', # option.PriceSide.LAST,  # 'bid', 'ask', 'mid','last', option.PriceSide.LAST
            underlying_time_stamp='Settle', # option.TimeStamp.SETTLE,  # 'Close', 'Default',
            underlying_price_side='Last' # option.PriceSide.LAST
            # volatility_model=option.VolatilityModel.SVI
        ))
    for i in range(len(HSI_test6.index))]
In [122]:
# hist_daily_universe_l = [
#     option.Definition(
#         underlying_type=option.UnderlyingType.ETI,
#         buy_sell='Buy',
#         instrument_code=list(HSI_test2['valid_ric'][0].keys())[0],  # 'STXE42000D3.EX' #  'HSI19300N3.HF^B23',  # list(HSI_test2['valid_ric'][0].keys())[0],
#         strike=float(hist_opt_found_strk_pr),
#         pricing_parameters=option.PricingParameters(
#             valuation_date=HSI_test6.index[i].strftime('%Y-%m-%d'),
#             market_value_in_deal_ccy=float(HSI_test6['OptionPrice'][i]),
#             risk_free_rate_percent=float(HSI_test6['RfRatePrct'][i]),
#             underlying_price=float(HSI_test6['UndrlyingPr'][i]),
#             pricing_model_type='BlackScholes',
#             volatility_type='Implied',
#             underlying_time_stamp='Default',
#             report_ccy='HKD'
#         ))
#     for i in range(len(HSI_test6.index))]
In [123]:
batchOf = 100
for i, j in enumerate(Chunks(hist_daily_universe_l, batchOf)):
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(hist_daily_universe_l, 100)])} started")
    # Example request with Body Parameter - Symbology Lookup
    response6 = rdf.Definitions(universe=j, fields=requestFields)
    response6 = response6.get_data()
    if i == 0:
        response6df = response6.data.df
    else:
        response6df = response6df.append(response6.data.df, ignore_index=True)
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(hist_daily_universe_l, 100)])} ended")
Batch of 61 requests no. 1/1 started
Batch of 61 requests no. 1/1 ended
In [124]:
response6df
Out[124]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType Volatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
0 3713 0.7365 16358.52 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
1 4025 0.731 16081.04 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
2 2939 0.713 17325.66 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
3 2665 0.71 17619.71 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
4 2212 0.7965 18343.12 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
56 57 0.577 22566.78 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
57 44 0.532 22688.9 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
58 90 0.523 22069.73 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
59 101 0.5745 21842.33 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
60 74 0.5925 22072.18 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN

61 rows × 14 columns

In [125]:
HSIdf = response6df.copy()
HSIdf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)
HSIdf
Out[125]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType ImpliedVolatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
0 3713 0.7365 16358.52 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
1 4025 0.731 16081.04 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
2 2939 0.713 17325.66 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
3 2665 0.71 17619.71 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
4 2212 0.7965 18343.12 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
56 57 0.577 22566.78 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
57 44 0.532 22688.9 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
58 90 0.523 22069.73 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
59 101 0.5745 21842.33 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
60 74 0.5925 22072.18 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN

61 rows × 14 columns

In [126]:
HSIdf[5:40]
Out[126]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType ImpliedVolatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
5 2269 0.7945 18256.48 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
6 2379 0.786 18045.66 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
7 2392 0.778 17992.54 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
8 2625 0.792 17655.91 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
9 2769 0.9705 17424.41 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
10 2667 0.9815 17523.81 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
11 2565 0.978 17660.9 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
12 2632 0.938 17573.58 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
13 2859 0.978 17297.94 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
14 2149 1.1655 18204.68 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
15 1944 1.17 18597.23 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
16 1823 1.159 18736.44 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
17 1821 1.118 18675.35 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
18 1355 1.146 19518.29 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
19 1351 1.2555 19441.18 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
20 1668 1.095 18814.82 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
21 1298 1.074 19450.23 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
22 1089 1.042 19900.87 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
23 1354 1.083 19463.63 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
24 1217 1.1585 19596.2 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
25 1137 1.11 19673.45 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
26 1289 1.078 19368.59 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
27 1290 0.991 19450.67 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
28 1247 0.991 19352.81 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
29 1469 0.9335 19094.8 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
30 1407 0.7945 19160.49 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
31 1073 0.81 19679.22 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
32 1156 0.6465 19593.06 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
33 1156 0.6465 19593.06 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
34 1156 0.6465 19593.06 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
35 896 0.6565 19898.91 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
36 1036 0.666 19741.14 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
37 1008 0.678 19781.41 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
38 1008 0.678 19781.41 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN
39 847 0.7605 20145.29 BlackScholes HistoricalYield Settle HKD Calculated <NA> NaN NaN NaN NaN NaN

Delivery Layer¶

In [127]:
HSI_test6_del_lay_list = [
        {
          "instrumentType": "Option",
          "instrumentDefinition": {
            "buySell": "Buy",
            "underlyingType": "Eti",
            "instrumentCode": list(HSI_test2['valid_ric'][0].keys())[0],  # "instrumentCode": None,
            "strike": float(hist_opt_found_strk_pr),
          },
          "pricingParameters": {
            "marketValueInDealCcy": float(HSI_test6['OptionPrice'][i]),
            "riskFreeRatePercent": float(HSI_test6['RfRatePrct'][i]),
            "underlyingPrice": float(HSI_test6['UndrlyingPr'][i]),
            "pricingModelType": "BlackScholes",
            "dividendType": "ImpliedYield",
            "volatilityType": "Implied",
            "underlyingTimeStamp": "Default",
            "reportCcy": "HKD"
          }
        }
      for i in range(len(HSI_test6.index))]
In [128]:
batchOf = 100
for i, j in enumerate(Chunks(HSI_test6_del_lay_list, batchOf)):
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(HSI_test6_del_lay_list, batchOf)]))} started")
    # Example request with Body Parameter - Symbology Lookup
    request_definition = rd.delivery.endpoint_request.Definition(
        method=rd.delivery.endpoint_request.RequestMethod.POST,
        url='https://api.refinitiv.com/data/quantitative-analytics/v1/financial-contracts',
        body_parameters={"fields": requestFields,
                         "outputs": ["Data", "Headers"],
                         "universe": j})

    response8 = request_definition.get_data()
    headers_name = [h['name'] for h in response8.data.raw['headers']]

    if i == 0:
        response8df = pd.DataFrame(
            data=response8.data.raw['data'], columns=headers_name)
        # print({"fields": requestFields,
        #        "outputs": ["Data", "Headers"],
        #        "universe": j})
    else:
        _response8df = pd.DataFrame(
            data=response8.data.raw['data'], columns=headers_name)
        response8df = response8df.append(_response8df, ignore_index=True)
    # display(_response8df)
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(HSI_test6_del_lay_list, batchOf)]))} ended")
Batch of 100 requests no. 1/1 started
Batch of 100 requests no. 1/1 ended
In [129]:
response8df
Out[129]:
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice PricingModelType DividendType UnderlyingTimeStamp ReportCcy VolatilityType Volatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
0 None None None None None None None None None None None None None None
1 None None None None None None None None None None None None None None
2 None None None None None None None None None None None None None None
3 None None None None None None None None None None None None None None
4 None None None None None None None None None None None None None None
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
56 None None None None None None None None None None None None None None
57 None None None None None None None None None None None None None None
58 None None None None None None None None None None None None None None
59 None None None None None None None None None None None None None None
60 None None None None None None None None None None None None None None

61 rows × 14 columns

Creating a class with PEP 3107 (a.k.a.: Type Hints)¶

We are now going to look into using PEP 3107 (and PEP 484) (and some decorators). In line with PEP, I will also now use PEP8 naming conventions.

In [130]:
import nb_mypy  # !pip3 install nb_mypy --trusted-host pypi.org # https://pypi.org/project/nb-mypy/ # https://gitlab.tue.nl/jupyter-projects/nb_mypy/-/blob/master/Nb_Mypy.ipynb
In [131]:
%load_ext nb_mypy
Version 1.0.4
In [132]:
%reload_ext nb_mypy
Version 1.0.4
In [133]:
%nb_mypy On
In [134]:
%nb_mypy DebugOff
In [135]:
# %nb_mypy unknown
In [136]:
from datetime import date as dtdate
import pandas_market_calendars as mcal  # See `https://github.com/rsheftel/pandas_market_calendars/blob/master/examples/usage.ipynb` for info on this market calendar library
from typing import Tuple, Union, Dict, List, Any
import numpy as np
import calendar
from __future__ import annotations  # This native library allows us to use not-yet-fully-deffined classes as a Type Hint when inside that class.
In [137]:
import refinitiv.data as rd  # This is LSEG's Data and Analytics' API wrapper, called the Refinitiv Data Library for Python.
from refinitiv.data.content import historical_pricing  # We will use this Python Class in `rd` to show the Implied Volatility data already available before our work.
from refinitiv.data.content import search  # We will use this Python Class in `rd` to fid the instrument we are after, closest to At The Money.
from refinitiv.data.content.ipa.financial_contracts import option  # We're going to need thtis to use the content layer of the RD library and the calculators of greeks and Impl Volat in IPA & ETI
import refinitiv.data.content.ipa.financial_contracts as rdf  # We're going to need thtis to use the content layer of the RD library and the calculators of greeks and Impl Volat in Instrument Pricing Analytics (IPA) and Exchange Traded Instruments (ETI)

import numpy as np  # We need `numpy` for mathematical and array manipilations.
import pandas as pd  # We need `pandas` for datafame and array manipilations.
import calendar  # We use `calendar` to identify holidays and maturity dates of intruments of interest.
import pytz  # We use `pytz` to manipulate time values aiding `calendar` library. to import its types, you might need to run `!python3 -m pip install types-pytz`
import pandas_market_calendars as mcal  # Used to identify holidays. See `https://github.com/rsheftel/pandas_market_calendars/blob/master/examples/usage.ipynb` for info on this market calendar library
from datetime import datetime, timedelta, timezone  # We use these to manipulate time values
from dateutil.relativedelta import relativedelta  # We use `relativedelta` to manipulate time values aiding `calendar` library.

# `plotly` is a library used to render interactive graphs:
import plotly
import plotly.graph_objects as go
import plotly.express as px  # This is just to see the implied vol graph when that field is available
import matplotlib.pyplot as plt  # We use `matplotlib` to just in case users do not have an environment suited to `plotly`.
from IPython.display import display, clear_output  # We use `clear_output` for users who wish to loop graph production on a regular basis.

# Let's authenticate ourseves to LSEG's Data and Analytics service, Refinitiv:
try:  # The following libraries are not available in Codebook, thus this try loop
    rd.open_session(config_name="C:\\Example.DataLibrary.Python-main\\Example.DataLibrary.Python-main\\Configuration\\refinitiv-data.config.json")
    rd.open_session("desktop.workspace")
except:
    rd.open_session()
print(f"Here we are using the refinitiv Data Library version {rd.__version__}")
Here we are using the refinitiv Data Library version 1.1.1
In [138]:
class index_imp_vola_and_greeks_IPA_calc():  # All about Type Hints here: https://realpython.com/python-type-checking/#static-type-checking

    def __init__(  # Constroctor
        self,
        index_underlying: str = ".STOXX50E"
    ):

        self.index_underlying: str = index_underlying
        # self.expiryYearOfInterest: int = datetime.now().year
        # self.graphStyle: str = 'without out of trading hours'  # 'overlay', '3 graphs', 'simple'
        # self.graphTemplate: str = 'plotly_dark'
        # self.debug: bool = False
        # self.returnDfGraph: bool = False
        # # def change_attrs(self, **kwargs):  for kwarg in kwargs:    self.__setattr__(kwarg, kwargs[kwarg])

    def get_exp_dates(
        self,
        year: int = datetime.now().year,
        days: bool = True,
        mcal_get_calendar: str = 'EUREX'
    ) -> Dict[int, Union[dtdate, str]]:
        '''
        get_exp_dates Version 4.0:

        This function gets expiration dates for a year for NDX options, which are the 3rd Fridays of each month.

        Changes
        ----------------------------------------------
        Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
            (i) for the function's holiday argument to be changed, and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and
            (ii) for the function to output full date objects as opposed to just days of the month if agument days=True.

        Changed from Version 2.0 to 3.0: Jonathan Legrand changed this function to reflec the fact that it can be used for indexes other than EUREX.

        Changed from Version 3.0 to 4.0: Jonathan Legrand changed this function to be in line with PEP 3107 (type hints).

        Dependencies
        ----------------------------------------------
        Python library 'pandas_market_calendars' version 3.2

        Parameters
        -----------------------------------------------
        Input:
            year(int): year for which expiration days are requested

            mcal_get_calendar(str): String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
                Default: mcal_get_calendar='EUREX'

            days(bool): If True, only days of the month is outputed, else it's dataeime objects
                Default: days=True

        Output
        -----------------------------------------------
            dates(dict): dictionary of expiration days for each month of a specified year in datetime.date format.
        '''

        i: int  # this is for the 'for loop' in this function coming below

        # get CBOE market holidays
        Cal: mcal.get_calendar = mcal.get_calendar(mcal_get_calendar)
        holidays: Tuple[np.datetime64, ...] = Cal.holidays().holidays

        # set calendar starting from Saturday
        c: calendar.Calendar = calendar.Calendar(firstweekday=calendar.SATURDAY)

        # get the 3rd Friday of each month
        exp_dates: dict = {}  # https://stackoverflow.com/questions/48054521/indicating-multiple-value-in-a-dict-for-type-hints
        date: dtdate
        for i in range(1, 13):
            date = c.monthdatescalendar(year, i)[2][-1]
            # check if found date is an holiday and get the previous date if it is
            if date in holidays:
                date = date + timedelta(-1)
            # append the date to the dictionary
            if year in exp_dates and days:
                exp_dates[year].append(date.day)
            elif year in exp_dates:
                exp_dates[year].append(date)
            elif days:
                exp_dates[year] = [date.day]
            else:
                exp_dates[year] = [date]

        return exp_dates

    def search_index_opt_ATM(
        self,
        debug: bool = False,
        after: int = 15,
        call_or_put: str = 'Put',
        searchFields: List[str] = ["ExchangeCode", "UnderlyingQuoteName"],
        include_weekly_opts: bool = False,
        topNuSearchResults: int = 10_000,
        timeOfCalcDatetime: datetime = datetime.now(),  # Here we allow for historical analysis.
        underMrktPriceField: str = "TRDPRC_1"
    ) -> index_imp_vola_and_greeks_IPA_calc:

        self.after = after
        self.timeOfCalcDatetime = timeOfCalcDatetime
        self.underMrktPriceField = underMrktPriceField
        i: int; j: dtdate; k: str  # this is for the 'for loop' in this function coming below

        self.exchangeC: str; self.exchangeRIC: str; self.mcalGetCalendar: str
        if self.index_underlying == ".STOXX50E":
            self.exchangeC, self.exchangeRIC, self.mcalGetCalendar = 'EUX', 'STX', 'EUREX'
        elif self.index_underlying == '.SPX':
            self.exchangeC, self.exchangeRIC, self.mcalGetCalendar = 'OPQ', 'SPX', 'CBOE_Futures'  # 'CBOE_Index_Options'  # should be 'CBOE_Index_Options'... CBOT_Equity

        timeOfCalcStr: str=timeOfCalcDatetime.strftime('%Y-%m-%d')
        fullDatesAtTimeOfCalc: dict = self.get_exp_dates(
            year=timeOfCalcDatetime.year,
            days=False,
            mcal_get_calendar=self.mcalGetCalendar)
        fullDatesAtTimeOfCalcDatetime: List[datetime] = [
            datetime(j.year, j.month, j.day)
            for j in fullDatesAtTimeOfCalc[
                list(fullDatesAtTimeOfCalc.keys())[0]]]
        expiryDateOfInt: datetime = [
            j for j in fullDatesAtTimeOfCalcDatetime
            if j > timeOfCalcDatetime + relativedelta(days=self.after)][0]

        if debug: print(f"expiryDateOfInt: {expiryDateOfInt}")

        # Certain search fields are nessesary for the next steps, so let's add them to the `searchFields` object:
        for k in ['DocumentTitle', 'RIC', 'StrikePrice', 'UnderlyingQuoteRIC'][::-1]:  # the `[::-1]` reverses the list
            searchFields.insert(0, k)

        # Now let's build our Search filter:
        _filter: str = f"RCSAssetCategoryLeaf eq 'Option' \
                        and RIC eq '{self.exchangeRIC}*' \
                        and CallPutOption eq '{call_or_put}' \
                        and ExchangeCode eq '{self.exchangeC}' \
                        and ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} \
                        and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}"
        if not include_weekly_opts:
            _filter += " and DocumentTitle ne '*Weekly*'"

        response1 = search.Definition(
            view=search.Views.SEARCH_ALL,  # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
            query=self.index_underlying,
            select=', '.join(map(str, searchFields)),
            filter=_filter,  # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
            top=topNuSearchResults,
        ).get_data()
        self.searchDf: pd.DataFrame = response1.data.df
        searchDf: pd.DataFrame = self.searchDf

        if debug:
            print("searchDf")
            display(searchDf)

        try:
            self.underlyingPrice: str = rd.get_history(
                universe=[self.index_underlying],
                fields=[underMrktPriceField],
                interval="tick").iloc[-1][0]
        except:
            print("Function failed at the search strage, returning the following dataframe: ")
            display(searchDf)

        if debug:
            print(f"Underlying {self.index_underlying}'s price recorded here was {self.underlyingPrice}")
            display(searchDf.iloc[(searchDf.StrikePrice-self.underlyingPrice).abs().argsort()[:10]])

        self.instrument: str = searchDf.iloc[(
            searchDf.StrikePrice-self.underlyingPrice).abs().argsort()[:1]].RIC.values[0]
        self.instrumentInfo: pd.DataFrame = searchDf.iloc[(
            searchDf.StrikePrice-self.underlyingPrice).abs().argsort()[:1]]
        self.ATMOpt = self.instrument

        return self

    def IPA_calc(
        self,
        dateBack: int = 3,
        optnMrktPriceField: str = "TRDPRC_1",
        debug: bool = False,
        atOptionTradeOnly: bool = True,
        riskFreeRatePrct: Union[str, None] = None,
        riskFreeRatePrctField: Union[str, None] = None,
        timeZoneInGraph: datetime = datetime.now().astimezone(),
        requestFields: List[str] = [
            "DeltaPercent", "GammaPercent", "RhoPercent",
            "ThetaPercent", "VegaPercent"],
        searchBatchMax: int = 100
    ) -> index_imp_vola_and_greeks_IPA_calc:

        i: int  # Type Hinted for loops coming up below.
        k: str  # Type Hinted for loops coming up below.
        n: int  # Type Hinted for loops coming up below.
        m: int  # Type Hinted for loops coming up below.
        p: rdf._base_definition.BaseDefinition  # Type Hinted for loops coming up below.
        self.dateBack: int = dateBack
        self.start: dtdate = self.timeOfCalcDatetime - pd.tseries.offsets.BDay(
            self.dateBack)
        self.startStr: str = (self.timeOfCalcDatetime - pd.tseries.offsets.BDay(
            self.dateBack)).strftime('%Y-%m-%dT%H:%M:%S.%f')  # e.g.: '2022-10-05T07:30:00.000'
        self.endStr: str = self.timeOfCalcDatetime.strftime('%Y-%m-%dT%H:%M:%S.%f')

        _optnMrktPrice: pd.DataFrame = rd.get_history(
            universe=[self.instrument],
            fields=[optnMrktPriceField],
            interval="10min",
            start=self.startStr,  # Ought to always start at 4 am for OPRA exchanged Options, more info in the article below
            end=self.endStr)  # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below

        if _optnMrktPrice.empty:
            print(f"No data could be found for {self.instrument}, please check it on Refinitiv Workspace")

        if debug:
            print(self.instrument)
            display(_optnMrktPrice)

        # get a datapoint every 10 min
        optnMrktPrice: pd.DataFrame = _optnMrktPrice.resample(
            '10Min').mean()
        # Only keep trading days
        self.optnMrktPrice: pd.DataFrame = optnMrktPrice[
            optnMrktPrice.index.strftime('%Y-%m-%d').isin(
                [k for k in _optnMrktPrice.index.strftime('%Y-%m-%d').unique()])]
        # Forward Fill to populate NaN values
        self.optnMrktPrice.fillna(method='ffill', inplace=True)

        # Note also that one may want to only look at 'At Option Trade' datapoints,
        # i.e.: Implied Volatility when a trade is made for the Option, but not when
        # none is made. For this, we will use the 'At Trade' (`AT`) dataframes:
        if atOptionTradeOnly:
            self.AToptnMrktPrice: pd.DataFrame = _optnMrktPrice

        self.underlying: str = self.searchDf.iloc[
            (self.searchDf.StrikePrice).abs().argsort()[
                :1]].UnderlyingQuoteRIC.values[0][0]

        _underlyingMrktPrice: pd.DataFrame = rd.get_history(
            universe=[self.underlying],
            fields=[self.underMrktPriceField],
            interval="10min",
            start=self.startStr,
            end=self.endStr)

        # Let's put it al in one data-frame, `df`. Some datasets will have data
        # going from the time we set for `startStr` all the way to `endStr`. Some won't
        # because no trade happened in the past few minutes/hours. We ought to base
        # ourselves on the dataset with values getting closer to `end` and `ffill`
        # for the other column. As a result, the following `if` loop is needed:
        if optnMrktPrice.index[-1] >= _underlyingMrktPrice.index[-1]:
            df: pd.DataFrame = self.optnMrktPrice.copy()
            df[f"underlying {self.underlying} {self.underMrktPriceField}"] = _underlyingMrktPrice
        else:
            df = _underlyingMrktPrice.copy()
            df.rename(
                columns={self.underMrktPriceField:
                         f"underlying {self.underlying} {self.underMrktPriceField}"},
                inplace=True)
            df[self.underMrktPriceField] = self.optnMrktPrice
            df.columns.name = self.optnMrktPrice.columns.name
        df.fillna(method='ffill', inplace=True)  # Forward Fill to populate NaN values
        selfdf: pd.DataFrame = df.dropna()

        if atOptionTradeOnly:
            ATunderlyingMrktPrice: pd.DataFrame = self.AToptnMrktPrice.join(
                _underlyingMrktPrice,
                rsuffix=f"_{self.underlying}_underlying",
                lsuffix=f"_{self.instrument}_OptPr",
                how='inner')

        self.strikePrice: pd.DataFrame = self.searchDf.iloc[
            (self.searchDf['StrikePrice']-self.underlyingPrice).abs().argsort()[
                :1]].StrikePrice.values[0]

        # I didn't think that I needed to Type Hint for the event when
        # `_riskFreeRatePrct` & `_riskFreeRatePrctField` were `None`, but Error Messages
        # suggest otherwise...
        _riskFreeRatePrct: Union[str, None]
        _riskFreeRatePrctField: Union[str, None]
        if riskFreeRatePrct is None and self.index_underlying == ".SPX":
            _riskFreeRatePrct, _riskFreeRatePrctField = 'USDCFCFCTSA3M=', 'TR.FIXINGVALUE'
        elif riskFreeRatePrct is None and self.index_underlying == ".STOXX50E":
            _riskFreeRatePrct, _riskFreeRatePrctField = 'EURIBOR3MD=', 'TR.FIXINGVALUE'
        elif riskFreeRatePrct is not None:
            _riskFreeRatePrct, _riskFreeRatePrctField = riskFreePrctRate, riskFreeRatePrctField
        self.riskFreeRatePrct: Union[str, None] = riskFreeRatePrct
        self.riskFreeRatePrctField: Union[str, None] = riskFreeRatePrctField

        _RfRatePrct: pd.DataFrame = rd.get_history(
            universe=[_riskFreeRatePrct],  # USD3MFSR=, USDSOFR=
            fields=[_riskFreeRatePrctField],
            # Since we will use `dropna()` as a way to select the rows we are after later on in the code, we need to ask for more risk-free data than needed, just in case we don't have enough:
            start=(self.start - timedelta(days=1)).strftime('%Y-%m-%d'),  # https://teamtreehouse.com/community/local-variable-datetime-referenced-before-assignment
            end=(self.timeOfCalcDatetime +
                 timedelta(days=1)).strftime('%Y-%m-%d'))

        self.RfRatePrct: pd.DataFrame = _RfRatePrct.resample(
            '10Min').mean().fillna(method='ffill')
        df['RfRatePrct'] = self.RfRatePrct
        self.df: pd.DataFrame = df.fillna(method='ffill')

        if atOptionTradeOnly:
            pd.options.mode.chained_assignment = None  # default='warn'
            ATunderlyingMrktPrice['RfRatePrct'] = [
                pd.NA for i in ATunderlyingMrktPrice.index]
            for i in self.RfRatePrct.index:
                _i: str = str(i)[:10]
                for n, m in enumerate(ATunderlyingMrktPrice.index):
                    if _i in str(m):
                        if len(self.RfRatePrct.loc[i].values) == 2:
                            ATunderlyingMrktPrice[
                                'RfRatePrct'].iloc[n] = self.RfRatePrct.loc[i].values[0][0]
                        elif len(self.RfRatePrct.loc[i].values) == 1:
                            ATunderlyingMrktPrice[
                                'RfRatePrct'].iloc[n] = self.RfRatePrct.loc[i].values[0]
            self.ATdf: pd.DataFrame = ATunderlyingMrktPrice.copy().fillna(method='ffill')  # This is in case there were no Risk Free datapoints released after a certain time, but trades on the option still went through.

        if timeZoneInGraph != 'GMT':
            if atOptionTradeOnly:
                self.ATdf.index = [
                    self.ATdf.index[i].replace(
                        tzinfo=pytz.timezone(
                            'GMT')).astimezone(
                        tz=datetime.now().astimezone().tzinfo)
                    for i in range(len(self.ATdf))]
            else:
                df.index = [
                    df.index[i].replace(
                        tzinfo=pytz.timezone(
                            'GMT')).astimezone(
                        tz=timeZoneInGraph.tzinfo)
                    for i in range(len(df))]

        # Define our message to the calculation endpoint in the RDP (Refinitiv Data Platform) API, `atOptionTradeOnly`:
        self.universeL: List[rdf._base_definition.BaseDefinition]
        if atOptionTradeOnly:
            self.universeL = [
                option.Definition(
                    underlying_type=option.UnderlyingType.ETI,
                    buy_sell='Buy',
                    instrument_code=self.instrument,
                    strike=float(self.strikePrice),
                    pricing_parameters=option.PricingParameters(
                        market_value_in_deal_ccy=float(
                            self.ATdf[
                                f"{optnMrktPriceField}_{self.instrument}_OptPr"][i]),
                        risk_free_rate_percent=float(self.ATdf['RfRatePrct'][i]),
                        underlying_price=float(
                            self.ATdf[
                                f"{self.underMrktPriceField}_{self.underlying}_underlying"][i]),
                        pricing_model_type='BlackScholes',
                        volatility_type='Implied',
                        underlying_time_stamp='Default',
                        report_ccy='EUR'))
                for i in range(len(self.ATdf.index))]
        else:
            self.universeL = [
                option.Definition(
                    underlying_type=option.UnderlyingType.ETI,
                    buy_sell='Buy',
                    instrument_code=self.instrument,
                    strike=float(self.strikePrice),
                    pricing_parameters=option.PricingParameters(
                        market_value_in_deal_ccy=float(df[optnMrktPriceField][i]),
                        risk_free_rate_percent=float(df.RfRatePrct[i]),
                        underlying_price=float(
                            df[f"underlying {self.underlying} {self.underMrktPriceField}"][i]),
                        pricing_model_type='BlackScholes',
                        volatility_type='Implied',
                        underlying_time_stamp='Default',
                        report_ccy='EUR'))
                for i in range(len(df.index))]

        # We would like to keep a minimum of these fields in the Search Responce in order to construct following graphs:
        for k in ["MarketValueInDealCcy", "RiskFreeRatePercent",
                  "UnderlyingPrice", "Volatility"][::-1]:
            requestFields.insert(0, k)
        self.requestFields: List[str] = requestFields

        for i, p in enumerate(
            [self.universeL[i:i+searchBatchMax]
             for i in range(0, len(self.universeL), searchBatchMax)]):  # This list chunks our `universeL` in batches of `searchBatchMax`

            _IPADf: pd.DataFrame = rdf.Definitions(
                universe=p, fields=requestFields).get_data().data.df
            if i == 0:
                self.IPADf: pd.DataFrame = _IPADf
            else:
                self.IPADf: pd.DataFrame = self.IPADf.append(
                    _IPADf, ignore_index=True)

        if atOptionTradeOnly:
            self.IPADf.index = self.ATdf.index
        else:
            self.IPADf.index = self.df.index

        self.atOptionTradeOnly: bool = atOptionTradeOnly

        return self

    def Simple_graph(
        self,
        maxColwidth: int = 200,
        size: Tuple[int, int] = (15, 5),
        lineStyle: str = '.-',  # 'o-'
        plotting: str = 'Volatility',
        displayIndexInfo: bool = False
    ) -> index_imp_vola_and_greeks_IPA_calc:

        # We are going to want to show details about data retreived in a dataframe in the output of this function. The one line below allows us to maximise the width (column) length of cells to see all that is written within them.
        if displayIndexInfo:
            pd.options.display.max_colwidth = maxColwidth
            display(self.instrumentInfo)
        IPADfSimpleGraph: pd.DataFrame = pd.DataFrame(
            data=self.IPADf[[plotting]].values,
            index=self.IPADf[[plotting]].index)
        fig, axes = plt.subplots(ncols=1, figsize=size)
        axes.plot(IPADfSimpleGraph, lineStyle)
        if self.atOptionTradeOnly:
            axes.set_title(f"{self.instrument} {plotting} At Trade Only")
        else:
            axes.set_title(f"{self.instrument} {plotting}")

        self.plt = plt

        return self

    def Graph(
        self,
        include: Union[None, List[str]] = None,
        graphTemplate: str = 'plotly_dark',
        debug: bool=False
    ) -> index_imp_vola_and_greeks_IPA_calc:

        if include is None:
            include = self.requestFields

        self.IPADfGraph = self.IPADf[include]
        if debug: display(self.IPADfGraph)
        self.fig = px.line(self.IPADfGraph)

        # # Seems like the below (comented out) is resolved. Leaving it for future debugging if needed.
        # try:  # This is needed in case there is not enough data to calculate values for all timestamps , see https://stackoverflow.com/questions/67244912/wide-format-csv-with-plotly-express
        #     self.IPADfGraph = self.IPADf[include]
        #     if debug: display(self.IPADfGraph)
        #     self.fig = px.line(self.IPADfGraph)
        # except:
        #     try:
        #         print(f"Not all fields could be graphed: {include}")
        #         self.IPADfGraph = self.IPADfGraph[
        #             ["Volatility", "MarketValueInDealCcy",
        #              "RiskFreeRatePercent", "UnderlyingPrice"]]
        #         self.fig = px.line(self.IPADfGraph)
        #     except:
        #         print(f"Not all fields could be graphed: ['Volatility', 'MarketValueInDealCcy', 'RiskFreeRatePercent', 'UnderlyingPrice']")
        #         self.IPADfGraph = self.IPADfGraph[
        #             ["Volatility", "MarketValueInDealCcy",
        #              "RiskFreeRatePercent", "UnderlyingPrice"]]
        #         self.fig = px.line(self.IPADfGraph)

        self.graphTemplate = graphTemplate

        return self

    def Overlay(
        self
    ) -> index_imp_vola_and_greeks_IPA_calc:

        self.fig.update_layout(
            title=self.instrument,
            template=self.graphTemplate)
        self.fig.for_each_trace(
            lambda t: t.update(
                visible=True if t.name in self.IPADfGraph.columns[:1] else "legendonly"))

        return self

    def Stack3(
        self,
        autosize: bool = False,
        width: int = 1300,
        height: int = 500
    ) -> index_imp_vola_and_greeks_IPA_calc:

        self.fig = plotly.subplots.make_subplots(rows=3, cols=1)

        self.fig.add_trace(go.Scatter(
            x=self.IPADf.index, y=self.IPADfGraph.Volatility,
            name='Op Imp Volatility'), row=1, col=1)
        self.fig.add_trace(go.Scatter(
            x=self.IPADf.index, y=self.IPADfGraph.MarketValueInDealCcy,
            name='Op Mk Pr'), row=2, col=1)
        self.fig.add_trace(go.Scatter(
            x=self.IPADf.index, y=self.IPADfGraph.UnderlyingPrice,
            name=self.underlying+' Undrlyg Pr'), row=3, col=1)

        self.fig.update(layout_xaxis_rangeslider_visible=False)
        self.fig.update_layout(title=self.IPADfGraph.columns.name)
        self.fig.update_layout(
            title=self.instrument,
            template=self.graphTemplate,
            autosize=autosize,
            width=width,
            height=height)

        return self
<cell>281: error: Name "riskFreePrctRate" is not defined  [name-defined]
<cell>385: error: Attribute "IPADf" already defined on line 383  [no-redef]

index_imp_vola_and_greeks_IPA_calc Overlay Test¶

In [139]:
from functools import wraps
from time import time
In [140]:
index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().ATMOpt
Out[140]:
'STXE43500Q3.EX'
In [141]:
# index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().IPA_calc().Graph(debug=True).Overlay().fig.show()
s = time()
t1 = index_imp_vola_and_greeks_IPA_calc()
print(time() - s)
0.0
In [142]:
s = time()
t2 = t1.search_index_opt_ATM()
print(time() - s)
1.7816781997680664
In [143]:
t1.ATMOpt
Out[143]:
'STXE43500Q3.EX'
In [144]:
index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().ATMOpt
Out[144]:
'STXE43500Q3.EX'
In [145]:
index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().ATMOpt
Out[145]:
'STXE43500Q3.EX'
In [146]:
s = time()
t3 = t2.IPA_calc()
print(time() - s)
8.981327056884766
In [147]:
s = time()
t3 = t2.Graph(debug=True)
print(time() - s)
MarketValueInDealCcy RiskFreeRatePercent UnderlyingPrice Volatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
2023-04-21 15:10:00+02:00 74.1 3.261 4388.53 17.27818 -0.481951 0.002073 -1.379468 -2.393523 4.347241
2023-04-21 16:10:00+02:00 66.4 3.261 4399.91 16.737174 -0.457794 0.002126 -1.311095 -2.304466 4.341625
2023-04-21 16:20:00+02:00 65.4 3.261 4402.48 16.77626 -0.452394 0.002117 -1.296228 -2.298778 4.338143
2023-04-21 16:40:00+02:00 68.0 3.261 4398.0 16.903316 -0.462018 0.002108 -1.32326 -2.327254 4.343873
2023-04-21 16:50:00+02:00 68.5 3.261 4397.3 16.943839 -0.463529 0.002104 -1.327558 -2.333574 4.344537
2023-04-21 17:00:00+02:00 65.6 3.261 4401.89 16.760751 -0.453621 0.00212 -1.299589 -2.299472 4.339001
2023-04-21 17:10:00+02:00 64.0 3.261 4403.72 16.582383 -0.449438 0.00214 -1.287494 -2.275224 4.335721
2023-04-21 17:20:00+02:00 64.1 3.261 4403.87 16.620992 -0.449185 0.002134 -1.286897 -2.278411 4.335545
2023-04-24 09:30:00+02:00 62.0 3.288 4397.64 15.491573 -0.461153 0.0023 -1.316977 -2.191139 4.342712
2023-04-24 09:40:00+02:00 61.5 3.288 4399.42 15.564664 -0.457166 0.002287 -1.306127 -2.191278 4.340484
2023-04-24 11:30:00+02:00 60.0 3.288 4402.1 15.499408 -0.450926 0.002291 -1.288643 -2.174238 4.33599
2023-04-24 11:50:00+02:00 60.0 3.288 4400.41 15.323005 -0.454516 0.00232 -1.298118 -2.163911 4.338564
2023-04-24 12:10:00+02:00 59.0 3.288 4402.69 15.329996 -0.449244 0.002315 -1.283515 -2.155264 4.33446
2023-04-24 14:30:00+02:00 57.4 3.288 4403.78 15.073365 -0.446162 0.002351 -1.274262 -2.125503 4.331455
2023-04-24 14:40:00+02:00 56.4 3.288 4406.06 15.075945 -0.440813 0.002346 -1.259423 -2.115878 4.325981
2023-04-24 14:50:00+02:00 56.1 3.288 4404.73 14.870728 -0.44344 0.002381 -1.266154 -2.101416 4.328564
2023-04-24 16:00:00+02:00 53.4 3.288 4406.12 14.388323 -0.438792 0.002456 -1.251937 -2.047303 4.322912
2023-04-24 16:30:00+02:00 53.8 3.288 4400.96 13.950732 -0.450583 0.002546 -1.283461 -2.027607 4.334443
2023-04-24 16:40:00+02:00 56.2 3.288 4400.91 14.499133 -0.451843 0.00245 -1.288453 -2.081537 4.335935
2023-04-24 16:50:00+02:00 56.6 3.288 4400.84 14.584084 -0.452179 0.002436 -1.289618 -2.09014 4.33627
2023-04-24 17:00:00+02:00 57.6 3.288 4400.12 14.739414 -0.454211 0.002412 -1.295677 -2.108346 4.337933
2023-04-24 17:20:00+02:00 58.0 3.288 4399.96 14.814857 -0.454727 0.0024 -1.297314 -2.116358 4.338359
2023-04-25 12:20:00+02:00 69.2 3.268 4373.55 14.426663 -0.519721 0.002488 -1.475922 -2.174705 4.326107
2023-04-25 14:50:00+02:00 67.3 3.268 4376.04 14.285014 -0.513851 0.002513 -1.459354 -2.154586 4.332155
2023-04-25 15:40:00+02:00 70.0 3.268 4372.86 14.528601 -0.52115 0.00247 -1.48014 -2.185821 4.324407
2023-04-25 16:50:00+02:00 69.0 3.268 4376.03 14.676222 -0.512987 0.002446 -1.45804 -2.190046 4.332592
2023-04-25 17:00:00+02:00 67.1 3.268 4379.11 14.599772 -0.505583 0.002459 -1.437408 -2.173745 4.338619
2023-04-25 17:10:00+02:00 65.0 3.268 4383.55 14.627478 -0.49464 0.002453 -1.407271 -2.161876 4.344624
2023-04-25 17:20:00+02:00 66.6 3.268 4380.14 14.604245 -0.503043 0.002458 -1.430411 -2.170903 4.340309
2023-04-26 09:10:00+02:00 82.0 3.242 4353.78 14.898058 -0.566098 0.002385 -1.604748 -2.262763 4.244659
2023-04-26 09:30:00+02:00 86.4 3.242 4337.91 13.712729 -0.613928 0.002523 -1.732603 -2.18267 4.102322
2023-04-26 10:20:00+02:00 93.5 3.242 4345.11 16.445507 -0.576754 0.002154 -1.638079 -2.408451 4.213518
2023-04-26 10:40:00+02:00 84.9 3.242 4352.09 15.355841 -0.567468 0.002314 -1.609732 -2.305105 4.240268
0.1157522201538086
In [148]:
s = time()
t4 = t3.Overlay().fig.show()
print(time() - s)
0.02027606964111328

index_imp_vola_and_greeks_IPA_calc Simple Graph Test¶

In [149]:
index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().IPA_calc().Graph(debug=True).Stack3().fig.show()
MarketValueInDealCcy MarketValueInDealCcy RiskFreeRatePercent RiskFreeRatePercent UnderlyingPrice UnderlyingPrice Volatility Volatility MarketValueInDealCcy MarketValueInDealCcy ... RiskFreeRatePercent UnderlyingPrice UnderlyingPrice Volatility Volatility DeltaPercent GammaPercent RhoPercent ThetaPercent VegaPercent
2023-04-21 15:10:00+02:00 74.1 74.1 3.261 3.261 4388.53 4388.53 17.27818 17.27818 74.1 74.1 ... 3.261 4388.53 4388.53 17.27818 17.27818 -0.481951 0.002073 -1.379468 -2.393523 4.347241
2023-04-21 16:10:00+02:00 66.4 66.4 3.261 3.261 4399.91 4399.91 16.737174 16.737174 66.4 66.4 ... 3.261 4399.91 4399.91 16.737174 16.737174 -0.457794 0.002126 -1.311095 -2.304466 4.341625
2023-04-21 16:20:00+02:00 65.4 65.4 3.261 3.261 4402.48 4402.48 16.77626 16.77626 65.4 65.4 ... 3.261 4402.48 4402.48 16.77626 16.77626 -0.452394 0.002117 -1.296228 -2.298778 4.338143
2023-04-21 16:40:00+02:00 68.0 68.0 3.261 3.261 4398.0 4398.0 16.903316 16.903316 68.0 68.0 ... 3.261 4398.0 4398.0 16.903316 16.903316 -0.462018 0.002108 -1.32326 -2.327254 4.343873
2023-04-21 16:50:00+02:00 68.5 68.5 3.261 3.261 4397.3 4397.3 16.943839 16.943839 68.5 68.5 ... 3.261 4397.3 4397.3 16.943839 16.943839 -0.463529 0.002104 -1.327558 -2.333574 4.344537
2023-04-21 17:00:00+02:00 65.6 65.6 3.261 3.261 4401.89 4401.89 16.760751 16.760751 65.6 65.6 ... 3.261 4401.89 4401.89 16.760751 16.760751 -0.453621 0.00212 -1.299589 -2.299472 4.339001
2023-04-21 17:10:00+02:00 64.0 64.0 3.261 3.261 4403.72 4403.72 16.582383 16.582383 64.0 64.0 ... 3.261 4403.72 4403.72 16.582383 16.582383 -0.449438 0.00214 -1.287494 -2.275224 4.335721
2023-04-21 17:20:00+02:00 64.1 64.1 3.261 3.261 4403.87 4403.87 16.620992 16.620992 64.1 64.1 ... 3.261 4403.87 4403.87 16.620992 16.620992 -0.449185 0.002134 -1.286897 -2.278411 4.335545
2023-04-24 09:30:00+02:00 62.0 62.0 3.288 3.288 4397.64 4397.64 15.491573 15.491573 62.0 62.0 ... 3.288 4397.64 4397.64 15.491573 15.491573 -0.461153 0.0023 -1.316977 -2.191139 4.342712
2023-04-24 09:40:00+02:00 61.5 61.5 3.288 3.288 4399.42 4399.42 15.564664 15.564664 61.5 61.5 ... 3.288 4399.42 4399.42 15.564664 15.564664 -0.457166 0.002287 -1.306127 -2.191278 4.340484
2023-04-24 11:30:00+02:00 60.0 60.0 3.288 3.288 4402.1 4402.1 15.499408 15.499408 60.0 60.0 ... 3.288 4402.1 4402.1 15.499408 15.499408 -0.450926 0.002291 -1.288643 -2.174238 4.33599
2023-04-24 11:50:00+02:00 60.0 60.0 3.288 3.288 4400.41 4400.41 15.323005 15.323005 60.0 60.0 ... 3.288 4400.41 4400.41 15.323005 15.323005 -0.454516 0.00232 -1.298118 -2.163911 4.338564
2023-04-24 12:10:00+02:00 59.0 59.0 3.288 3.288 4402.69 4402.69 15.329996 15.329996 59.0 59.0 ... 3.288 4402.69 4402.69 15.329996 15.329996 -0.449244 0.002315 -1.283515 -2.155264 4.33446
2023-04-24 14:30:00+02:00 57.4 57.4 3.288 3.288 4403.78 4403.78 15.073365 15.073365 57.4 57.4 ... 3.288 4403.78 4403.78 15.073365 15.073365 -0.446162 0.002351 -1.274262 -2.125503 4.331455
2023-04-24 14:40:00+02:00 56.4 56.4 3.288 3.288 4406.06 4406.06 15.075945 15.075945 56.4 56.4 ... 3.288 4406.06 4406.06 15.075945 15.075945 -0.440813 0.002346 -1.259423 -2.115878 4.325981
2023-04-24 14:50:00+02:00 56.1 56.1 3.288 3.288 4404.73 4404.73 14.870728 14.870728 56.1 56.1 ... 3.288 4404.73 4404.73 14.870728 14.870728 -0.44344 0.002381 -1.266154 -2.101416 4.328564
2023-04-24 16:00:00+02:00 53.4 53.4 3.288 3.288 4406.12 4406.12 14.388323 14.388323 53.4 53.4 ... 3.288 4406.12 4406.12 14.388323 14.388323 -0.438792 0.002456 -1.251937 -2.047303 4.322912
2023-04-24 16:30:00+02:00 53.8 53.8 3.288 3.288 4400.96 4400.96 13.950732 13.950732 53.8 53.8 ... 3.288 4400.96 4400.96 13.950732 13.950732 -0.450583 0.002546 -1.283461 -2.027607 4.334443
2023-04-24 16:40:00+02:00 56.2 56.2 3.288 3.288 4400.91 4400.91 14.499133 14.499133 56.2 56.2 ... 3.288 4400.91 4400.91 14.499133 14.499133 -0.451843 0.00245 -1.288453 -2.081537 4.335935
2023-04-24 16:50:00+02:00 56.6 56.6 3.288 3.288 4400.84 4400.84 14.584084 14.584084 56.6 56.6 ... 3.288 4400.84 4400.84 14.584084 14.584084 -0.452179 0.002436 -1.289618 -2.09014 4.33627
2023-04-24 17:00:00+02:00 57.6 57.6 3.288 3.288 4400.12 4400.12 14.739414 14.739414 57.6 57.6 ... 3.288 4400.12 4400.12 14.739414 14.739414 -0.454211 0.002412 -1.295677 -2.108346 4.337933
2023-04-24 17:20:00+02:00 58.0 58.0 3.288 3.288 4399.96 4399.96 14.814857 14.814857 58.0 58.0 ... 3.288 4399.96 4399.96 14.814857 14.814857 -0.454727 0.0024 -1.297314 -2.116358 4.338359
2023-04-25 12:20:00+02:00 69.2 69.2 3.268 3.268 4373.55 4373.55 14.426663 14.426663 69.2 69.2 ... 3.268 4373.55 4373.55 14.426663 14.426663 -0.519721 0.002488 -1.475922 -2.174705 4.326107
2023-04-25 14:50:00+02:00 67.3 67.3 3.268 3.268 4376.04 4376.04 14.285014 14.285014 67.3 67.3 ... 3.268 4376.04 4376.04 14.285014 14.285014 -0.513851 0.002513 -1.459354 -2.154586 4.332155
2023-04-25 15:40:00+02:00 70.0 70.0 3.268 3.268 4372.86 4372.86 14.528601 14.528601 70.0 70.0 ... 3.268 4372.86 4372.86 14.528601 14.528601 -0.52115 0.00247 -1.48014 -2.185821 4.324407
2023-04-25 16:50:00+02:00 69.0 69.0 3.268 3.268 4376.03 4376.03 14.676222 14.676222 69.0 69.0 ... 3.268 4376.03 4376.03 14.676222 14.676222 -0.512987 0.002446 -1.45804 -2.190046 4.332592
2023-04-25 17:00:00+02:00 67.1 67.1 3.268 3.268 4379.11 4379.11 14.599772 14.599772 67.1 67.1 ... 3.268 4379.11 4379.11 14.599772 14.599772 -0.505583 0.002459 -1.437408 -2.173745 4.338619
2023-04-25 17:10:00+02:00 65.0 65.0 3.268 3.268 4383.55 4383.55 14.627478 14.627478 65.0 65.0 ... 3.268 4383.55 4383.55 14.627478 14.627478 -0.49464 0.002453 -1.407271 -2.161876 4.344624
2023-04-25 17:20:00+02:00 66.6 66.6 3.268 3.268 4380.14 4380.14 14.604245 14.604245 66.6 66.6 ... 3.268 4380.14 4380.14 14.604245 14.604245 -0.503043 0.002458 -1.430411 -2.170903 4.340309
2023-04-26 09:10:00+02:00 82.0 82.0 3.242 3.242 4353.78 4353.78 14.898058 14.898058 82.0 82.0 ... 3.242 4353.78 4353.78 14.898058 14.898058 -0.566098 0.002385 -1.604748 -2.262763 4.244659
2023-04-26 09:30:00+02:00 86.4 86.4 3.242 3.242 4337.91 4337.91 13.712729 13.712729 86.4 86.4 ... 3.242 4337.91 4337.91 13.712729 13.712729 -0.613928 0.002523 -1.732603 -2.18267 4.102322
2023-04-26 10:20:00+02:00 93.5 93.5 3.242 3.242 4345.11 4345.11 16.445507 16.445507 93.5 93.5 ... 3.242 4345.11 4345.11 16.445507 16.445507 -0.576754 0.002154 -1.638079 -2.408451 4.213518
2023-04-26 10:40:00+02:00 84.9 84.9 3.242 3.242 4352.09 4352.09 15.355841 15.355841 84.9 84.9 ... 3.242 4352.09 4352.09 15.355841 15.355841 -0.567468 0.002314 -1.609732 -2.305105 4.240268

33 rows × 21 columns

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [149], in <cell line: 1>()
----> 1 index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().IPA_calc().Graph(debug=True).Stack3().fig.show()

Input In [138], in index_imp_vola_and_greeks_IPA_calc.Graph(self, include, graphTemplate, debug)
    434 self.IPADfGraph = self.IPADf[include]
    435 if debug: display(self.IPADfGraph)
--> 436 self.fig = px.line(self.IPADfGraph)
    438 # # Seems like the below (comented out) is resolved. Leaving it for future debugging if needed.
    439 # try:  # This is needed in case there is not enough data to calculate values for all timestamps , see https://stackoverflow.com/questions/67244912/wide-format-csv-with-plotly-express
    440 #     self.IPADfGraph = self.IPADf[include]
   (...)
    454 #              "RiskFreeRatePercent", "UnderlyingPrice"]]
    455 #         self.fig = px.line(self.IPADfGraph)
    457 self.graphTemplate = graphTemplate

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\plotly\express\_chart_types.py:264, in line(data_frame, x, y, line_group, color, line_dash, symbol, hover_name, hover_data, custom_data, text, facet_row, facet_col, facet_col_wrap, facet_row_spacing, facet_col_spacing, error_x, error_x_minus, error_y, error_y_minus, animation_frame, animation_group, category_orders, labels, orientation, color_discrete_sequence, color_discrete_map, line_dash_sequence, line_dash_map, symbol_sequence, symbol_map, markers, log_x, log_y, range_x, range_y, line_shape, render_mode, title, template, width, height)
    216 def line(
    217     data_frame=None,
    218     x=None,
   (...)
    258     height=None,
    259 ) -> go.Figure:
    260     """
    261     In a 2D line plot, each row of `data_frame` is represented as vertex of
    262     a polyline mark in 2D space.
    263     """
--> 264     return make_figure(args=locals(), constructor=go.Scatter)

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\plotly\express\_core.py:1948, in make_figure(args, constructor, trace_patch, layout_patch)
   1945 layout_patch = layout_patch or {}
   1946 apply_default_cascade(args)
-> 1948 args = build_dataframe(args, constructor)
   1949 if constructor in [go.Treemap, go.Sunburst, go.Icicle] and args["path"] is not None:
   1950     args = process_dataframe_hierarchy(args)

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\plotly\express\_core.py:1405, in build_dataframe(args, constructor)
   1402     args["color"] = None
   1403 # now that things have been prepped, we do the systematic rewriting of `args`
-> 1405 df_output, wide_id_vars = process_args_into_dataframe(
   1406     args, wide_mode, var_name, value_name
   1407 )
   1409 # now that `df_output` exists and `args` contains only references, we complete
   1410 # the special-case and wide-mode handling by further rewriting args and/or mutating
   1411 # df_output
   1413 count_name = _escape_col_name(df_output, "count", [var_name, value_name])

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\plotly\express\_core.py:1222, in process_args_into_dataframe(args, wide_mode, var_name, value_name)
   1220     else:
   1221         col_name = str(argument)
-> 1222         df_output[col_name] = to_unindexed_series(df_input[argument])
   1223 # ----------------- argument is likely a column / array / list.... -------
   1224 else:
   1225     if df_provided and hasattr(argument, "name"):

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\plotly\express\_core.py:1073, in to_unindexed_series(x)
   1066 def to_unindexed_series(x):
   1067     """
   1068     assuming x is list-like or even an existing pd.Series, return a new pd.Series with
   1069     no index, without extracting the data from an existing Series via numpy, which
   1070     seems to mangle datetime columns. Stripping the index from existing pd.Series is
   1071     required to get things to match up right in the new DataFrame we're building
   1072     """
-> 1073     return pd.Series(x).reset_index(drop=True)

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\pandas\core\series.py:355, in Series.__init__(self, data, index, dtype, name, copy, fastpath)
    351 else:
    353     name = ibase.maybe_extract_name(name, data, type(self))
--> 355     if is_empty_data(data) and dtype is None:
    356         # gh-17261
    357         warnings.warn(
    358             "The default dtype for empty Series will be 'object' instead "
    359             "of 'float64' in a future version. Specify a dtype explicitly "
   (...)
    362             stacklevel=2,
    363         )
    364         # uncomment the line below when removing the DeprecationWarning
    365         # dtype = np.dtype(object)

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\pandas\core\construction.py:796, in is_empty_data(data)
    794 is_none = data is None
    795 is_list_like_without_dtype = is_list_like(data) and not hasattr(data, "dtype")
--> 796 is_simple_empty = is_list_like_without_dtype and not data
    797 return is_none or is_simple_empty

File c:\users\u6082174.ten\appdata\local\programs\python\python38-32\lib\site-packages\pandas\core\generic.py:1537, in NDFrame.__nonzero__(self)
   1535 @final
   1536 def __nonzero__(self):
-> 1537     raise ValueError(
   1538         f"The truth value of a {type(self).__name__} is ambiguous. "
   1539         "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
   1540     )

ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
In [ ]:
index_imp_vola_and_greeks_IPA_calc().search_index_opt_ATM().IPA_calc().Simple_graph().plt.show()
In [ ]:
test1 = index_imp_vola_and_greeks_IPA_calc(index_underlying=".SPX")
test2 = test1.search_index_opt_ATM(
    debug=False,
    after=15,
    call_or_put='Put',
    searchFields=["ExchangeCode", "UnderlyingQuoteName"],
    include_weekly_opts=False,
    topNuSearchResults=10,
    timeOfCalcDatetime=datetime.now(),
    underMrktPriceField="TRDPRC_1")
test3 = test2.IPA_calc(
    dateBack=3,
    optnMrktPriceField="TRDPRC_1",
    debug=False,
    atOptionTradeOnly=True,
    riskFreeRate=None,
    riskFreeRateField=None,
    timeZoneInGraph=datetime.now().astimezone())
In [ ]:
test3.IPADf
In [ ]:
test4 = test3.Simple_graph(
    maxColwidth=200,
    size=(15, 5),
    lineStyle='.-',  # 'o-'
    plotting='Volatility'
)
In [ ]:
test4.plt.show()
In [ ]:
# rd.close_session()  # It's good practice to close our RD session when done.

dash_tvlwc¶

In [ ]:
# # !pip3 install --trusted-host pypi.org dash_tvlwc
# # !pip3 install --trusted-host pypi.org dash

# import random

# import dash_tvlwc
# import dash
# from dash.dependencies import Input, Output, State
# from dash import html

# from datetime import datetime
# import random

# import pandas as pd

# def generate_random_series(*args, **kwargs):
#     return generate_random_ohlc(*args, **kwargs, close_only=True)

# def generate_random_ohlc(v0: float, ret=0.05, n=500, t0='2021-01-01', close_only=False):
#     datelist = [dt.strftime('%Y-%m-%d') for dt in pd.date_range(t0, periods=n).tolist()]
#     res = []
#     c = v0
#     for dt in datelist:
#         o = c
#         c = o * (1 + random.uniform(-ret, ret))
#         if not close_only:
#             h = max(o, c) * (1 + random.uniform(0, ret))
#             l = min(o, c) * (1 + random.uniform(-ret, 0))
#             res.append({
#                 "time": dt,
#                 "open": o,
#                 "high": h,
#                 "low": l,
#                 "close": c
#             })
#         else:
#             res.append({
#                 "time": dt,
#                 "value": c
#             })
#         o = c
#     return res

# app = dash.Dash(__name__, external_stylesheets=['./assets/stylesheet.css'])

# chart_options = {
#     'layout': {
#         'background': {'type': 'solid', 'color': '#1B2631'},
#         'textColor': 'white',
#     },
#     'grid': {
#         'vertLines': {'visible': False},
#         'horzLines': {'visible': False},
#     },
#     'localization': {'locale': 'en-US'}
# }

# panel1 = [
#     html.H2('Bar'),
#     dash_tvlwc.Tvlwc(
#         id='bar-chart',
#         seriesData=[generate_random_ohlc(v0=100, n=50)],
#         seriesTypes=['bar'],
#         width='100%',
#         chartOptions=chart_options
#     )
# ]


# p2_series = generate_random_ohlc(v0=1, n=50, ret=0.1)
# p2_series = [{'time': v['time']} if 12 < idx < 20 or idx > 45 else v for idx, v in enumerate(p2_series)]
# panel2 = [
#     html.H2('Candlestick'),
#     dash_tvlwc.Tvlwc(
#         id='candlestick-chart',
#         seriesData=[p2_series],
#         seriesTypes=['candlestick'],
#         seriesOptions=[{
#             'downColor': '#a6269a',
#             'upColor': '#ffaa30',
#             'borderColor': 'black',
#             'wickColor': 'black'
#         }],
#         width='100%',
#         chartOptions={'layout': {'background': {'type': 'solid', 'color': 'white'}}}
#     )
# ]


# panel3 = [
#     html.H2('Area'),
#     dash_tvlwc.Tvlwc(
#         id='area-chart',
#         seriesData=[generate_random_series(v0=15, n=50)],
#         seriesTypes=['area'],
#         seriesOptions=[{
#             'lineColor': '#FFAA30',
#             'topColor': '#2962FF',
#             'bottomColor': 'rgba(180, 98, 200, 0.1)',
#             'priceLineWidth': 3,
#             'priceLineColor': 'red'
#         }],
#         width='100%',
#         chartOptions=chart_options
#     )
# ]


# p4_series = generate_random_series(v0=5000, n=50)
# p4_mean = sum([p['value'] for p in p4_series]) / 50
# p4_max = max([p['value'] for p in p4_series])
# price_lines = [{'price': p4_max, 'color': '#2962FF', 'lineStyle': 0, 'title': 'MAX PRICE', 'axisLabelVisible': True}]
# panel4 = [
#     html.H2('Baseline'),
#     dash_tvlwc.Tvlwc(
#         id='baseline-chart',
#         seriesData=[p4_series],
#         seriesTypes=['baseline'],
#         seriesOptions=[{
#             'baseValue': {'type': 'price', 'price': p4_mean},
#             'topFillColor1': 'black',
#             'topFillColor2': 'rgba(255,255,255,0)',
#             'topLineColor': 'black',
#             'crosshairMarkerRadius': 8,
#             'lineWidth': 5,
#             'priceScaleId': 'left'
#         }],
#         seriesPriceLines=[price_lines],
#         width='100%',
#         chartOptions={
#             'rightPriceScale': {'visible': False},
#             'leftPriceScale': {'visible': True, 'borderColor': 'rgba(197, 203, 206, 1)',},
#             'timeScale': {'visible': False},
#             'grid': {'vertLines': {'visible': False}, 'horzLines': {'style': 0, 'color': 'black'}},
#         }
#     )
# ]


# # add markers and add color to volume bar
# p5_series = generate_random_series(v0=1, n=50, ret=0.1)
# markers = [
#     {'time': p5_series[15]['time'], 'position': 'aboveBar', 'color': '#f68410', 'shape': 'circle', 'text': 'Signal'},
#     {'time': p5_series[20]['time'], 'position': 'belowBar', 'color': 'white', 'shape': 'arrowUp', 'text': 'Buy'}
# ]
# p5_series_volume = generate_random_series(v0=100, n=50, ret=0.05)
# for i in p5_series_volume:
#     i['color'] = random.choice(['rgba(0, 150, 136, 0.8)', 'rgba(255,82,82, 0.8)'])

# panel5 = [
#     html.H2('Line and volume'),
#     dash_tvlwc.Tvlwc(
#         id='line-chart',
#         seriesData=[p5_series, p5_series_volume],
#         seriesTypes=['line', 'histogram'],
#         seriesOptions=[
#             {
#                 'lineWidth': 1
#             },
#             {
#                 'color': '#26a69a',
#                 'priceFormat': {'type': 'volume'},
#                 'priceScaleId': '',
#                 'scaleMargins': {'top': 0.9, 'bottom': 0},
#                 'priceLineVisible': False
#             },
#         ],
#         seriesMarkers=[markers],
#         width='100%',
#         chartOptions=chart_options
#     )
# ]


# p6_series = generate_random_series(v0=100, n=50, ret=0.3)
# for idx, _ in enumerate(p6_series):
#     if idx in [5,12,13,14,20,33,34,46]:
#         p6_series[idx]['color'] = 'white'

# panel6 = [
#     html.H2('Histogram'),
#     dash_tvlwc.Tvlwc(
#         id='histogram-chart',
#         seriesData=[p6_series],
#         seriesTypes=['histogram'],
#         seriesOptions=[{
#             'color': '#ff80cc',
#             'base': 100,
#             'priceLineVisible': False,
#             'lastValueVisible': False
#         }],
#         width='100%',
#         chartOptions={'layout': {'textColor': '#ff80cc', 'background': {'type': 'solid', 'color': 'black'}}}
#     )
# ]

# app.layout = html.Div([
#     html.H1('Chart options and series options'),
#     html.Div(className='container', children=[
#         html.Div(className='one', children=panel1),
#         html.Div(className='two', children=panel2),
#         html.Div(className='three', children=panel3),
#         html.Div(className='four', children=panel4),
#         html.Div(className='five', children=panel5),
#         html.Div(className='six', children=panel6),
#     ])
# ])

# if __name__ == '__main__':
#     app.run_server(debug=True)
In [ ]:
# !where python
# import os, signal
# os.kill(os.getpid(), signal.SIGTERM)

Conclusion¶

As you can see, not only can we use IPA to gather large amounts of bespoke, calculated, values, but be can also portray this insight in a simple, quick and relevent way. The last cell in particular loops through our built fundction to give an updated graph every 5 seconds using 'legacy' technologies that would work in most environments (e.g.: Eikon Codebook).

References¶

Brilliant: Black-Scholes-Merton

What is the RIC syntax for options in Refinitiv Eikon?

Functions to find Option RICs traded on different exchanges

Eikon Calc Help Page

Making your code faster: Cython and parallel processing in the Jupyter Notebook

What Happens to Options When a Stock Splits?

Select column that has the fewest NA values

Return Column(s) if they Have a certain Percentage of NaN Values (Python)

How to Split a String Between Numbers and Letters?

Q&A¶

RIC nomenclature for expired Options on Futures

Expiration Dates for Expired Options API

Measure runtime of a Jupyter Notebook code cellMeasure runtime of a Jupyter Notebook code cell

What does these parameters mean in jupyter notebook when I input "%%time"?What does these parameters mean in jupyter notebook when I input "%%time"?

Appendix¶

No Intra Day Price Data Is Available For Old Expired Options¶

In [ ]:
# import refinitiv.data as rd
# rd.get_config().set_param(
#     param=f"logs.transports.console.enabled", value=True
# )
# session = rd.open_session("desktop.workspace")
# session.set_log_level("DEBUG")
In [ ]:
# SPX_test2OptnMrktPrice10min = rd.get_history(
#     universe=list(SPX_test2['valid_ric'][0].keys())[0],
#     fields=["TRDPRC_1"],
#     interval="10min",
#     start='2021-10-25T11:53:09.168706',
#     end='2021-10-26T19:53:10.166926')  # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below
# SPX_test2OptnMrktPrice
In [ ]:
# SPX_test2OptnMrktPrice1d = rd.content.historical_pricing.summaries.Definition(
#     universe=list(SPX_test2['valid_ric'][0].keys())[0],
#     start='2021-10-23',
#     end='2021-10-29',
#     fields=['BID', 'ASK', 'TRDPRC_1'],
#     interval=rd.content.historical_pricing.Intervals.DAILY).get_data()
In [ ]:
SPX_test2OptnMrktPrice1d.data.df
In [ ]:
# SPX_test2OptnMrktPriceTest = rd.content.historical_pricing.summaries.Definition(
#     interval=rd.content.historical_pricing.Intervals.FIVE_MINUTES,
# #     interval='PT1H',
#     universe=list(SPX_test2['valid_ric'][0].keys())[0],
#     start='2021-10-25T10:53:09.168706',
#     end='2021-10-27T19:53:10.166926',
#     fields=['BID', 'ASK', 'TRDPRC_1']).get_data()
In [ ]:
# SPX_test2OptnMrktPriceTest.errors
In [ ]:
# SPX_test2OptnMrktPriceTest.get_data().data.df
In [ ]:
# SPX_test2OptnMrktPriceTest
In [ ]:
ek = rd.eikon
# The key is placed in a text file so that it may be used in this code without showing it itself:
eikon_key = open("eikon.txt", "r")
ek.set_app_key(str(eikon_key.read()))
# It is best to close the files we opened in order to make sure that we don't stop any other services/programs from accessing them if they need to:
eikon_key.close()
In [ ]:
testDf, err = ek.get_data(
    instruments=list(SPX_test2['valid_ric'][0].keys())[0],
    fields=['TRDPRC_1'],
    parameters={
        'SDate': '2021-10-25T10:53:09.168706',
        'EDate': '2021-10-28T19:53:10.166926'})
In [ ]:
testDf

Equity Option Price Adjustments to Corporate Actions¶

We're lucky in that indexes's prices are adjusted automatically to corporate actions, so we do't have to take them into account in our calculations. However, we would have to if we focussed on Equities. If this is the case you're interested in, don't hesitate to use the below which should work with the functions above:

In [ ]:
def Get_trans_days(year, mcal_get_calendar='EUREX', trans_day='first'):
    '''
    Get_trans_days Version 2.0:

    This function gets transaction days for each month of a specified year.

    Changes
    ----------------------------------------------
    Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
        (i) function name changed from `get_trans_days` to `Get_trans_days`
        (ii) for the function's holiday argument to be changed, allowing for any calendar supported by `mcal.get_calendar` and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and

    Dependencies
    ----------------------------------------------
    import datetime.timedelta as timedelta. (This is a native Python library, so it ought to be version '3.8.12'.)
    Python library 'pandas_market_calendars' version '3.2'.
    pandas_market_calendars as mcal version '4.1.0'

    Parameters
    -----------------------------------------------
    Arguments:
        year (int):
            Year for which transaction days are requested

        mcal_get_calendar(str):
            String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
            Default: mcal_get_calendar='EUREX'

        trans_day (str):
            Takes either 'first' or 'third' indicating to the first business day or the 3rd Friday of a month respectively
            Default: trans_day='first'

    Output:
        trans_days (list):
            List of days for 12 month
    '''
    # get the first business day of each month
    if trans_day == 'first':
        mkt = mcal.get_calendar(mcal_get_calendar)
        holidays = mkt.holidays().holidays

        # set start and end day ranges
        start_date, end_date = f"{str(year)}-01-01", f"{str(year)}-12-31"
        trans_days = []

        for date in pd.date_range(start_date, end_date, freq='BMS'):
            # get the first day after the weekend after checking for holiday
            while date.isoweekday() > 5 or date in holidays:
                date += timedelta(1)
            # add found day to the list
            trans_days.append(date.date().day)

    # get the 3rd Friday for each month by calling function "get_exp_dates"
    elif trans_day == 'third':
        trans_days = get_exp_dates(year)[year]
    else:
        print('Please input "first" or "third" for transaction day')
        return
    return trans_days
In [ ]:
def Adjustment_factor(corp_event, year=None, date=None, trans_day='first', mcal_get_calendar='EUREX'):
    '''
    Adjustment_factor Version 2.0:

    This function gets adjustment factor(s) of stock split for a given asset. If no split event is happened during the requested period
    function returns 1(if date argument is used) or list of twelve 1s (if year argument is used), which assumes no adjustment in prices.

    Changes
    ----------------------------------------------
    Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code:
        (i) function name changed from `adjustment_factor` to `Adjustment_factor`.

    Dependencies
    ----------------------------------------------
    Python library 'pandas_market_calendars' version 3.2

    Parameters
    -----------------------------------------------
    Input:
        asset (str):
            RIC code of the asset
        year (int):
            Year for which stock split events are requested
        date (str with date (YYYY-MM-DD) format):
            Date as of which stock split events are requested
        trans_day (str, default = 'first'):
            Indicates the date of the transaction for get_trans_days function
        mcal_get_calendar(str):
            String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
            Default: mcal_get_calendar='EUREX'
    Output:
        adj_factor (float): This is returned in case of date argument is used. The output is the Adjustment factor after split
        adj_factors(list): This is returned in case of year argument is used. The output is the list of Adjustment factors after split for each month
    '''
    # if there is no stock split corporate event
    if (corp_event is None) or (corp_event['Capital Change Effective Date'][0] is None):
        if year is not None and date is None:
            # return list of 1s if year argument is used
            adj_factors = 12 * [1]
            return adj_factors
        elif date is not None and year is None:
            # return 1 if exact date argument is used
            adj_factor = 1
            return adj_factor
        else:
            print('Either Year or exact date needs to be passed to the function')
    # if there is an event adjustment factor(s) is(are) calculated
    else:
        if year is not None and date is None:  # in case of year argument is used
            # request transaction dates
            trans_days = Get_trans_days(year=year, mcal_get_calendar='EUREX', trans_day=trans_day)
            adj_factors = []
            for i in range(1, 13):
                # get exp_dates and use it as a request date for stock split corporate events
                exp_date = str(year) + '-' + str(i) + '-' + str(trans_days[i - 1])
                # initiate adj_factor with 1
                adj_factor = 1
                # we first check if the expiration date of option is after or before the adjustment date
                for j in reversed(range(len(corp_event))):
                    # if expiration date is smaller than adjustment date then we need adjustment
                    if pd.to_datetime(exp_date).strftime('%Y-%m-%d') < pd.to_datetime(corp_event['Capital Change Effective Date'][j]).strftime('%Y-%m-%d'):
                        adj_factor = float(corp_event['Adjustment Factor'][j]) * adj_factor # we should consider all adjustment factors which are after the expiration day
                # append adjustment factor of the month to the list
                adj_factors.append(adj_factor)
            return adj_factors

        elif date is not None and year is None:  # In case exact date  argument is ued
            adj_factor = 1
            for j in reversed(range(len(corp_event))):
                # if expiration date is smaller than adjustment date then we need adjustment
                if pd.to_datetime(date).strftime('%Y-%m-%d') < corp_event['Capital Change Effective Date'][j]:
                    adj_factor = float(corp_event['Adjustment Factor'][j]) * adj_factor
            return adj_factor
        else:
            print('Either Year or exact date needs to be passed to the function')
In [ ]:
def Get_potential_rics(year, trans_day, asset, OTM_size, diff, opt_type, reportLog=True, debug=False):
    '''
    Get_potential_rics Version 2.0:
    This function returns the list of potential option RICs for a specified year reconstructed based on Refinitiv RIC and option trading rules.

    Changes
    ----------------------------------------------
    Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code:
        (i) function name changed from `get_potential_rics` to `Get_potential_rics`
        (ii) changed function body to reflect changed funciton name from `get_trans_days` to `Get_trans_days`
        (iii) added argument `reportLog`, a boolean value that, if left to default `True`, report log of the function output

    Dependencies
    ----------------------------------------------
    Python library 'Refinitiv Dataplatform' version 1.0.0a8.post1

    Parameters
    -----------------------------------------------
    Input:
        year (int): year for which transaction days are requested
        trans_day (str, default = 'first'): takes either 'first' or 'third' indicating to the first business day or the 3rd Friday of a month respectively
        asset (str): RIC code of the asset
        OTM_size (int): percentage number indicating how far away is the strike price from the price of the underlying asset
        diff (int): Tolarated difference in OTM to construct upper and lower bounds of strike prices
        opt_type (str): takes either "call" or "put"
        reportLog (bool): report log of the function output if True. Default: reportLog=True.

    Output:
        Tuple of two objects:
            potential_RICs (dict): dictionary containing potential RICs for each month with strike prices from the lower to upper bounds of strikes
            strikes (list): list of the strike prices calculated based on OTM size for each month
    '''

    # open file to report log of the function output
    if reportLog:
        report = open("Log report.txt", "a")

    # call functions to get expiration and transaction days
    trans_days = Get_trans_days(
        year=year, mcal_get_calendar='EUREX', trans_day=trans_day)
    trans_days_prev = Get_trans_days(
        year=year-1, mcal_get_calendar='EUREX', trans_day=trans_day)
    dates = get_exp_dates(year)

    # trim underlying asset's RIC to get the required part for option RIC
    if asset[0] == '.':  # check if the asset is an index or an equity
        asset_name = asset[1:]  # get the asset name - we remove "." symbol for index options
        adj_factors = 12 * [1]  # set adjustment factors to be equal to 1 for each month (no stock split corporate event is applicable to indices)
    else:
        asset_name = asset.split('.')[0]  # we need only the first part of the RICs for equities
        # get list of corporate events for equities
        if debug: print(f"asset: {asset}")
        corp_event = rd.get_data(
            universe=asset,
            fields=["TR.CAEffectiveDate", "TR.CAAdjustmentFactor", "TR.CAAdjustmentType"],
            parameters={
                "CAEventType": "SSP",
                "SDate": datetime.today().strftime("%Y-%m-%d"),
                "EDate": "-50Y"})
        if debug: print(f"corp_event: {corp_event}")
        # run adjustment_factor function to get the factors
        adj_factors = Adjustment_factor(corp_event, year=year, trans_day=trans_day)

    # define expiration month codes to be used after "^" sign
    exp = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']
    potential_RICs = {}
    strikes = []

    # construct potential RICs for each month of a specified year
    for j in range(1, 13):
        # get day of expiration for a month
        day = dates[year][j - 1]
        # get date of price request, which is in the previous month of expiration 
        if j != 1:
            date = str(year) + '-' + str(j - 1) + '-' + str(trans_days[j - 2])
        if j == 1:  # for January, we need to subtract a year along with the month
            date = str(year - 1) + '-' + str(j + 11) + '-' + str(trans_days_prev[j + 10])
        # get price of underlying asset as of the transaction date

        # get the corresponding adjustment factor for the month
        adj_factor = adj_factors[j-1]

        price = rd.get_data(
            asset,
            fields=['TR.PriceClose'],
            parameters={'SDate': date})
        price = float(price.iloc[0, 1]) / adj_factor  # adjust prices by the adjustment factor. if no sptick split events adj_factor = 1

        # calculate the strike price for call options
        if opt_type.lower() == 'call':
            strike = price + price * OTM_size / 100
            # define expiration month codes for call options while also considering the strike price
            if strike > 999.999:
                exp_codes_call = [
                    'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l']
            else:
                exp_codes_call = [
                    'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']
            # get expiration month code for a month
            exp_month = exp_codes_call[j-1]

        # calculate the strike price and get expiration month code for a month for put options
        elif opt_type.lower() == 'put':
            strike = price - price * OTM_size/100
            if strike > 999.999:
                exp_codes_put = [
                    'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x']
            else:
                exp_codes_put = [
                    'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X']
            exp_month = exp_codes_put[j-1]

        strikes.append(int(round(strike, 0)))  # append the calculated strike price to the list of strikes

        # calculate lower and upper bounds for strikes considering the value of the strike
        if strike > 999.999:
            step = 5  # we loop over strikes with a step 5 for larger strikes
            strike_ub = int(round((strike + strike * diff / 100),-1))
            strike_lb = int(round((strike - strike * diff / 100),-1))
        else:
            step = 1  # we loop over strikes with a step 1 for smaller strikes
            strike_ub = int(strike + strike * diff / 100)
            strike_lb = int(strike - strike * diff / 100)

        # construct RICs for each strike from the lower to upper bound ranges of strikes
        for n in range(strike_lb, strike_ub + step, step):
            k = None  # for strikes < 1000 along with 1 step increment change in strikes we do 0.5 point increment change which 
                      # allows us to consider strikes with decimal points. This is important to get closer OTMs for smaller valued assets.
            # here we construct option RICs by adding together all the RIC components
            # Please note some of the components are different depending on the strike value
            plc_holdr1 = asset_name + exp_month + str(day) + str(year)[-2:]
            plc_holdr2 = exp[j - 1] + str(year)[-2:]
            if n < 10:
                z = plc_holdr1 + '00' + str(n) + '00.U^' + plc_holdr2# for integer steps
                k = plc_holdr1 + '00' + str(n) + '50.U^' + plc_holdr2# for decimal steps
            elif n >= 10 and n < 100:
                z = plc_holdr1 + '0' + str(n) + '00.U^' + plc_holdr2
                k = plc_holdr1 + '0' + str(n) + '50.U^' + plc_holdr2
            if n >= 100 and n < 1000:
                z = plc_holdr1 + str(n) + '00.U^' + plc_holdr2  
                k = plc_holdr1 + str(n) + '50.U^' + plc_holdr2
            elif n >= 1000 and n < 10000:
                z = plc_holdr1 + str(n) + '0.U^' + plc_holdr2
            elif n >= 10000 and n < 20000:
                z = plc_holdr1 + 'A' + str(n)[-4:] + '.U^' + plc_holdr2
            elif n >= 20000 and n < 30000:
                z = plc_holdr1 + 'B' + str(n)[-4:] + '.U^' + plc_holdr2
            elif n >= 30000 and n < 40000:
                z = plc_holdr1 + 'C' + str(n)[-4:] + '.U^' + plc_holdr2
            elif n >= 40000 and n < 50000:
                z = plc_holdr1 + 'D' + str(n)[-4:] + '.U^' + plc_holdr2

            # append RICs with integer strikes to the dictionary
            if j in potential_RICs:
                potential_RICs[j].append(z)
                # append RICs with decimal point strikes to the dictionary
                if k is not None:
                    potential_RICs[j].append(k)
            else:
                potential_RICs[j] = [z]
                if k is not None:
                    potential_RICs[j].append(k)

    # report funtion results and close the log file
    if reportLog:
        now = {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
        report.write(f'{now}: Potential RICs for {opt_type} options with {OTM_size}% OTM for {year} are constructed\n')
        report.close()
    return potential_RICs, strikes
In [ ]:
call_RICs, call_strikes = Get_potential_rics(
    year=2017,
    trans_day='first',
    asset='.STOXX50E',
    OTM_size=5,
    diff=3,
    opt_type='call',
    reportLog=True,
    debug=False)
In [ ]:
print(call_strikes)
In [ ]:
rd.get_data(
    '.STOXX50E',
    fields=['TR.CAAdjustmentFactor(SDate=2017-01-01,EDate=2018-12-31)'])

Troubleshooting IPA¶

I then gathered data on two Options, a live one ('STXE42000D3.EX') and an expired one ('HSI19300N3.HF^B23').

In [ ]:
import refinitiv.data as rd
import refinitiv.data.content.ipa.financial_contracts as rdf
from refinitiv.data.content.ipa.financial_contracts import option
import pandas as pd

# Let's authenticate ourseves to LSEG's Data and Analytics service, Refinitiv:
try:  # The following libraries are not available in Codebook, thus this try loop
    rd.open_session(config_name="C:\\Example.DataLibrary.Python-main\\Example.DataLibrary.Python-main\\Configuration\\refinitiv-data.config.json")
    rd.open_session("desktop.workspace")
except:
    rd.open_session()
In [ ]:
def Chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
In [ ]:
HSI_test0 = rd.content.historical_pricing.summaries.Definition(
    'STXE42000D3.EX',
    interval=rd.content.historical_pricing.Intervals.DAILY,
    fields=['SETTLE'],
    start='2022-11-07',
    end='2023-02-01').get_data().data.df
hk_rf = 100 - rd.get_history(
    universe=['HK3MT=RR'],  # HK10YGB=EODF, HKGOV3MZ=R, HK3MT=RR
    fields=['TR.MIDPRICE'],
    start=HSI_test0.index[0].strftime('%Y-%m-%d'),
    end=HSI_test0.index[-1].strftime('%Y-%m-%d'))
HSI_test1 = pd.merge(
    HSI_test0, hk_rf, left_index=True, right_index=True)
HSI_test1 = HSI_test1.rename(
    columns={"SETTLE": "OptionPrice", "Mid Price": "RfRatePrct"})
hist_HSI_undrlying_pr = rd.get_history(
    universe=[HSI_underlying_RIC],
    fields=["TRDPRC_1"],
    # interval="1D",
    start=HSI_test0.index[0].strftime('%Y-%m-%d'),
    end=HSI_test0.index[-1].strftime('%Y-%m-%d'))
HSI_test2 = pd.merge(HSI_test1, hist_HSI_undrlying_pr,
                    left_index=True, right_index=True)
HSI_test2 = HSI_test2.rename(
    columns={"TRDPRC_1": "UndrlyingPr"})
HSI_test2.columns.name = 'STXE42000D3.EX'
HSI_test2

Delivery Layer¶

In [ ]:
requestFields = ['MarketValueInDealCcy', 'RiskFreeRatePercent', 'UnderlyingPrice', 'PricingModelType', 'DividendType', 'UnderlyingTimeStamp', 'ReportCcy', 'VolatilityType', 'Volatility', 'DeltaPercent', 'GammaPercent', 'RhoPercent', 'ThetaPercent', 'VegaPercent']
Live Option¶
In [ ]:
live_universe = [
        {
          "instrumentType": "Option",
          "instrumentDefinition": {
            "buySell": "Buy",
            "underlyingType": "Eti",
            "instrumentCode": 'STXE42000D3.EX',
            "strike": float(4200),
          },
          "pricingParameters": {
            "marketValueInDealCcy": float(HSI_test2['OptionPrice'][i]),
            "riskFreeRatePercent": float(HSI_test2['RfRatePrct'][i]),
            "underlyingPrice": float(HSI_test2['UndrlyingPr'][i]),
            "pricingModelType": "BlackScholes",
            "dividendType": "ImpliedYield",
            "volatilityType": "Implied",
            "underlyingTimeStamp": "Default",
            "reportCcy": "HKD"
          }
        }
      for i in range(len(HSI_test2.index))]
In [ ]:
batchOf = 100
for i, j in enumerate(Chunks(live_universe, 100)):
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(live_universe, batchOf)]))} started")
    # Example request with Body Parameter - Symbology Lookup
    live_troubleshoot_request_definition = rd.delivery.endpoint_request.Definition(
        method=rd.delivery.endpoint_request.RequestMethod.POST,
        url='https://api.refinitiv.com/data/quantitative-analytics/v1/financial-contracts',
        body_parameters={"fields": requestFields,
                         "outputs": ["Data", "Headers"],
                         "universe": j})

    live_troubleshoot_resp = live_troubleshoot_request_definition.get_data()
    headers_name = [h['name'] for h in live_troubleshoot_resp.data.raw['headers']]

    if i == 0:
        live_troubleshoot_df = pd.DataFrame(
            data=live_troubleshoot_resp.data.raw['data'],
            columns=headers_name)
    else:
        _live_troubleshoot_df = pd.DataFrame(
            data=live_troubleshoot_resp.data.raw['data'],
            columns=headers_name)
        live_troubleshoot_df = live_troubleshoot_df.append(_live_troubleshoot_df, ignore_index=True)
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(live_universe, batchOf)]))} ended")
In [ ]:
live_troubleshoot_df
Expired Option¶
In [ ]:
hist_universe = [
        {
          "instrumentType": "Option",
          "instrumentDefinition": {
            "buySell": "Buy",
            "underlyingType": "Eti",
            "instrumentCode": 'HSI19300N3.HF^B23',
            "strike": float(4200),
          },
          "pricingParameters": {
            "marketValueInDealCcy": float(HSI_test2['OptionPrice'][i]),
            "riskFreeRatePercent": float(HSI_test2['RfRatePrct'][i]),
            "underlyingPrice": float(HSI_test2['UndrlyingPr'][i]),
            "pricingModelType": "BlackScholes",
            "dividendType": "ImpliedYield",
            "volatilityType": "Implied",
            "underlyingTimeStamp": "Default",
            "reportCcy": "HKD"
          }
        }
      for i in range(len(HSI_test2.index))]
In [ ]:
for i, j in enumerate(Chunks(hist_universe, 100)):
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(hist_universe, batchOf)]))} started")
    # Example request with Body Parameter - Symbology Lookup
    hist_troubleshoot_request_definition = rd.delivery.endpoint_request.Definition(
        method=rd.delivery.endpoint_request.RequestMethod.POST,
        url='https://api.refinitiv.com/data/quantitative-analytics/v1/financial-contracts',
        body_parameters={"fields": requestFields,
                         "outputs": ["Data", "Headers"],
                         "universe": j})

    hist_troubleshoot_resp = hist_troubleshoot_request_definition.get_data()
    headers_name = [h['name'] for h in hist_troubleshoot_resp.data.raw['headers']]

    if i == 0:
        hist_troubleshoot_df = pd.DataFrame(
            data=hist_troubleshoot_resp.data.raw['data'],
            columns=headers_name)
    else:
        _hist_troubleshoot_df = pd.DataFrame(
            data=hist_troubleshoot_resp.data.raw['data'],
            columns=headers_name)
        hist_troubleshoot_df = hist_troubleshoot_df.append(_hist_troubleshoot_df, ignore_index=True)
    print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(hist_universe, batchOf)]))} ended")
In [ ]:
hist_troubleshoot_df

Content Layer¶

In [ ]:
live_hist_daily_universe_l = [
    option.Definition(
        underlying_type=option.UnderlyingType.ETI,
        buy_sell='Buy',
        instrument_code='STXE42000D3.EX',  # 'STXE42000D3.EX' #  'HSI19300N3.HF^B23',  # list(HSI_test2['valid_ric'][0].keys())[0],
        strike=float(4200),
        pricing_parameters=option.PricingParameters(
            market_value_in_deal_ccy=float(HSI_test2['OptionPrice'][i]),
            risk_free_rate_percent=float(HSI_test2['RfRatePrct'][i]),
            underlying_price=float(HSI_test2['UndrlyingPr'][i]),
            pricing_model_type='BlackScholes',
            volatility_type='Implied',
            underlying_time_stamp='Default',
            report_ccy='HKD'
        ))
    for i in range(len(HSI_test2.index))]
In [ ]:
for i, j in enumerate(Chunks(live_hist_daily_universe_l, 100)):
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(hist_daily_universe_l, 100)])} started")
    # Example request with Body Parameter - Symbology Lookup
    troubleshoot_resp_live = rdf.Definitions(universe=j, fields=requestFields)
    troubleshoot_resp_live_getdata = troubleshoot_resp_live.get_data()
    if i == 0:
        troubleshoot_resp_live_df = troubleshoot_resp_live_getdata.data.df
    else:
        troubleshoot_resp_live_df = troubleshoot_resp_live_df.append(
            troubleshoot_resp_live_getdata.data.df, ignore_index=True)
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(hist_daily_universe_l, 100)])} ended")
In [ ]:
troubleshoot_resp_live_df
In [ ]:
exp_hist_daily_universe_l = [
    option.Definition(
        underlying_type=option.UnderlyingType.ETI,
        buy_sell='Buy',
        instrument_code='HSI19300N3.HF^B23',  # 'STXE42000D3.EX' #  'HSI19300N3.HF^B23'
        strike=float(hist_opt_found_strk_pr),
        pricing_parameters=option.PricingParameters(
            market_value_in_deal_ccy=float(HSI_test2['OptionPrice'][i]),
            risk_free_rate_percent=float(HSI_test2['RfRatePrct'][i]),
            underlying_price=float(HSI_test2['UndrlyingPr'][i]),
            pricing_model_type='BlackScholes',
            volatility_type='Implied',
            underlying_time_stamp='Default',
            report_ccy='HKD'
        ))
    for i in range(len(HSI_test2.index))]
In [ ]:
for i, j in enumerate(Chunks(exp_hist_daily_universe_l, 100)):
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(exp_hist_daily_universe_l, 100)])} started")
    # Example request with Body Parameter - Symbology Lookup
    troubleshoot_resp_exp = rdf.Definitions(universe=j, fields=requestFields)
    troubleshoot_resp_exp_getdata = troubleshoot_resp_exp.get_data()
    if i == 0:
        troubleshoot_resp_exp_df = troubleshoot_resp_exp_getdata.data.df
    else:
        troubleshoot_resp_exp_df = troubleshoot_resp_exp_df.append(
            troubleshoot_resp_exp.data.df, ignore_index=True)
    print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(exp_hist_daily_universe_l, 100)])} ended")
In [ ]:
troubleshoot_resp_exp_df
[Error 400 - invalid_grant] empty error description
In [ ]:
 
In [ ]: